  • Research
  • Open Access

On complete moment convergence for arrays of rowwise pairwise negatively quadrant dependent random variables

Journal of Inequalities and Applications 2019, 2019:46

https://doi.org/10.1186/s13660-019-1995-9

  • Received: 24 April 2018
  • Accepted: 11 February 2019

Abstract

In this paper, we establish some results on the complete moment convergence for weighted sums of pairwise negatively quadrant dependent (PNQD) random variables. The obtained results improve the corresponding ones of Ko (Stoch. Int. J. Probab. Stoch. Process. 85:172–180, 2013).

Keywords

  • Pairwise negatively quadrant dependent
  • Complete moment convergence
  • Weighted sums

MSC

  • 60F15

1 Introduction

The following concept of PNQD random variables was introduced by Lehmann [2].

Definition 1.1

A sequence \(\{X_{n}, n\geq 1\}\) of random variables is said to be pairwise negatively quadrant dependent (PNQD) if for all real numbers \(r_{i}\), \(r_{j}\) and all \(i\neq j\),
$$ P(X_{i}>r_{i}, X_{j}>r_{j})\leq P(X_{i}>r_{i})P(X_{j}>r_{j}). $$
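As a quick illustration (ours, not from the paper), the antithetic pair \((U, 1-U)\) with U uniform on \((0,1)\) is NQD: here \(P(U>r_{1}, 1-U>r_{2})=\max (0, 1-r_{1}-r_{2})\leq (1-r_{1})(1-r_{2})\). The helper names below are hypothetical and used only for this check:

```python
import itertools

def joint_tail(r1, r2):
    # P(U > r1, 1 - U > r2) = P(r1 < U < 1 - r2) for U ~ Uniform(0, 1)
    return max(0.0, 1.0 - r1 - r2)

def marginal_tail(r):
    # P(U > r) = P(1 - U > r) = 1 - r for r in [0, 1]
    return max(0.0, min(1.0, 1.0 - r))

# Verify the NQD inequality on a grid of thresholds.
grid = [i / 10 for i in range(11)]
for r1, r2 in itertools.product(grid, grid):
    assert joint_tail(r1, r2) <= marginal_tail(r1) * marginal_tail(r2) + 1e-12
```

The inequality here is elementary: \((1-r_{1})(1-r_{2})-(1-r_{1}-r_{2})=r_{1}r_{2}\geq 0\).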

Negative quadrant dependence is a stronger notion of dependence than negative correlation but weaker than negative association. The convergence properties of NQD random sequences have been studied in many papers. We refer to Wu [3] for a Kolmogorov-type three-series theorem, Matula [4] for a Kolmogorov-type strong law of large numbers, Jabbari [5] for almost sure limit theorems for weighted sums of pairwise NQD random variables under mild conditions, Li and Yang [6], Wu [7], and Xu and Tang [8] for strong convergence, Gan and Chen [9] for complete convergence and complete moment convergence, Wu and Guan [10] for a mean convergence theorem and weak laws of large numbers for dependent random variables, and so on.

The concept of complete convergence of a sequence of random variables was first given by Hsu and Robbins [11].

Definition 1.2

A sequence of random variables \(\{U_{n}, n \in N\}\) is said to converge completely to a constant a if for any \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty }P\bigl( \vert U_{n}-a \vert >\varepsilon \bigr)< \infty . $$

By the Borel–Cantelli lemma, complete convergence implies that \(U_{n}\rightarrow a\) almost surely. Therefore complete convergence is an important tool for establishing the almost sure convergence of sums and weighted sums of random variables.
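The Borel–Cantelli step can be spelled out: for each fixed \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty }P\bigl( \vert U_{n}-a \vert >\varepsilon \bigr)< \infty \quad\Longrightarrow\quad P\bigl( \vert U_{n}-a \vert >\varepsilon \mbox{ i.o.}\bigr)=0, $$
and taking \(\varepsilon =1/m\) and the union of the countably many exceptional null sets over \(m\geq 1\) yields \(U_{n}\rightarrow a\) almost surely.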

Recently, Ko [1] proved the following complete convergence theorem for arrays of PNQD random variables.

Theorem A

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables with mean zero, and let \(\{a_{nj}, j\geq 1, n\geq 1\}\) be an array of positive numbers. Let \(\{b_{n}, n\geq 1\}\) be a nondecreasing sequence of positive numbers, and let \(\{c_{n}, n\geq 1\}\) be a sequence of positive numbers. Assume that, for some \(0< t<2\) and all \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} P\bigl( \vert a _{nj}X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr)< \infty $$
(1.1)
and
$$ \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} a_{nj}^{2} E(X_{nj})^{2} I\bigl( \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr)< \infty . $$
(1.2)
Then
$$ \sum_{n=1}^{\infty } c_{n} P \Biggl\{ \max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} \bigl(a_{nj}X_{nj}-a_{nj}EX_{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \bigr) \Biggr\vert \geq \varepsilon b _{n}^{\frac{1}{t}} \Biggr\} < \infty . $$
(1.3)

Chow [12] was the first to establish complete moment convergence for a sequence of independent and identically distributed random variables, generalizing the result of Baum and Katz [13]. The concept of complete moment convergence is as follows.

Definition 1.3

Let \(\{Z_{n}, n\geq 1\}\) be a sequence of random variables, and let \(a_{n}> 0\), \(b_{n}> 0\), and \(q> 0\). If for any \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty }a_{n}E\bigl\{ b_{n}^{-1} \vert Z_{n} \vert -\varepsilon \bigr\} _{+}^{q}< \infty , $$
then this is called the complete moment convergence.
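The remarks in Sect. 2 repeatedly use the tail identity \(E\{Z-\varepsilon \}_{+}=\int _{0}^{\infty }P(Z>\varepsilon +u)\,du\) for \(Z\geq 0\). A small numerical sanity check (our own illustration, not from the paper): for \(Z\sim \operatorname{Exp}(1)\) both sides equal \(e^{-\varepsilon }\).

```python
import math

def tail(z):
    # P(Z > z) for Z ~ Exp(1)
    return math.exp(-z)

def integrated_tail(eps, n=100000, umax=40.0):
    # midpoint-rule approximation of \int_0^\infty P(Z > eps + u) du
    du = umax / n
    return sum(tail(eps + (i + 0.5) * du) for i in range(n)) * du

# E(Z - eps)_+ = exp(-eps) for the Exp(1) distribution
for eps in (0.1, 1.0, 2.5):
    assert abs(integrated_tail(eps) - math.exp(-eps)) < 1e-3
```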

It is easily seen that complete moment convergence is stronger than complete convergence. There are many papers on complete moment convergence; see, for example, Sung [14] for independent random variables, Wang and Hu [15] for the maximal partial sums of a martingale difference sequence, Shen et al. [16] for arrays of rowwise negatively superadditive dependent (NSD) random variables, Wu et al. [17] for arrays of rowwise END random variables, Wu [7] for negatively associated random variables, Wu et al. [18] for weighted sums of weakly dependent random variables, Wang et al. [19] for double indexed randomly weighted sums and their applications, Wu and Wang [20] for a class of dependent random variables, and so forth.

In this work, we strengthen Theorem A from complete convergence to complete moment convergence for PNQD random variables under slightly stronger conditions. In addition, we obtain much stronger conclusions under the same conditions as those of the corresponding theorems in Ko [1].

Throughout this paper, the symbol C always stands for a generic positive constant which may differ from one place to another. By \(I(A)\) we denote the indicator function of a set A. We also denote \(x_{+}=xI(x\geq 0)\).

2 Main results

Now we state the main results of this paper. The proofs are given in the next section.

Theorem 2.1

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables with mean zero, and let \(\{a_{nj}, j\geq 1, n\geq 1\}\) be an array of positive numbers. Let \(\{b_{n}, n\geq 1\}\) be a nondecreasing sequence of positive numbers, and let \(\{c_{n}, n\geq 1\}\) be a sequence of positive numbers. Assume that, for some \(0< t<2\) and all \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr) < \infty $$
(2.1)
and
$$ \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} a_{nj}^{2} E(X_{nj})^{2} I\bigl( \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr)< \infty . $$
(2.2)
Then, for all \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } c_{n} E \Biggl\{ b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} \bigl(a_{nj}X_{nj}-a _{nj}EX_{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \bigr) \Biggr\vert - \varepsilon \Biggr\} _{+}< \infty . $$
(2.3)

Remark 2.1

Let \(H_{nk}=\sum_{j=1}^{k} (a_{nj}X_{nj}-a_{nj}EX _{nj} I[|a_{nj} X_{nj}| < \varepsilon b_{n}^{\frac{1}{t}}] ) \). Note that
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} E \Bigl\{ b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \vert H_{nk} \vert - \varepsilon \Bigr\} _{+} \\ &\quad= \sum _{n=1}^{\infty } c_{n} \int _{0}^{\infty } P \Bigl(b_{n}^{- \frac{1}{t}} \max_{1\leq k\leq b_{n}} \vert H_{nk} \vert >\varepsilon +u \Bigr)\,du \\ &\quad\geq \sum_{n=1}^{\infty } c_{n} \int _{0}^{\varepsilon }P \Bigl( \max_{1\leq k\leq b_{n}} \vert H_{nk} \vert >(\varepsilon +u)b_{n}^{\frac{1}{t}} \Bigr)\,du \\ &\quad\geq \varepsilon \sum_{n=1}^{\infty } c_{n} P \Bigl( \max_{1\leq k\leq b_{n}} \vert H _{nk} \vert > 2 \varepsilon b_{n}^{\frac{1}{t}} \Bigr). \end{aligned}$$
Thus (2.3) is much stronger than (1.3).

Theorem 2.2

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables with mean zero, and let \(\{a_{nj}, j\geq 1, n\geq 1\}\) be an array of positive numbers. Let \(\{b_{n}, n\geq 1\}\) be a nondecreasing sequence of positive numbers. Assume that, for some sequence \(\{\lambda _{n}, n\geq 1\}\) with \(0<\lambda _{n}\leq 1\), we have \(E|X_{nj}|^{1+\lambda _{n}}<\infty \) for \(1\leq j\leq b_{n}\), \(n\geq 1\). If for some sequence \(\{c_{n}, n \geq 1\}\) of positive real numbers and \(0< t<2\),
$$ \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \bigl(b_{n}^{\frac{1}{t}} \bigr)^{-1- \lambda _{n}} \sum_{j=1}^{b_{n}} E \vert a_{nj} X_{nj} \vert ^{1+\lambda _{n}}< \infty , $$
(2.4)
then for any \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } c_{n} E \Biggl\{ b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj}X_{nj} \Biggr\vert - \varepsilon \Biggr\} _{+}< \infty . $$
(2.5)

Remark 2.2

Noting that the conditions of Theorem 2.2 are the same as in Theorem 3.2 in Ko [1], we have
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n}E \Biggl\{ b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj}X_{nj} \Biggr\vert - \varepsilon \Biggr\} _{+} \\ &\quad= \sum_{n=1}^{\infty } c_{n} \int _{0}^{\infty } P \Biggl(b_{n}^{-\frac{1}{t}} \max_{1\leq k\leq b_{n}} \Biggl\vert \sum_{j=1}^{k} a_{nj}X_{nj} \Biggr\vert > \varepsilon +u \Biggr)\,du \\ &\quad\geq \sum_{n=1}^{\infty } c_{n} \int _{0}^{\varepsilon }P \Biggl( \max_{1\leq k\leq b_{n}} \Biggl\vert \sum_{j=1}^{k} a_{nj}X_{nj} \Biggr\vert >( \varepsilon +u)b_{n}^{\frac{1}{t}} \Biggr)\,du \\ &\quad\geq \varepsilon \sum_{n=1}^{\infty } c_{n} P \Biggl( \max_{1\leq k\leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj}X_{nj} \Biggr\vert > 2\varepsilon b_{n}^{ \frac{1}{t}} \Biggr). \end{aligned}$$
Therefore (2.5) is much stronger than (3.8) of Theorem 3.2 in Ko [1]. To sum up, Theorem 2.2 improves Theorem 3.2 in Ko [1].

Corollary 2.3

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables, and let \(\{a_{nj}, j \geq 1, n\geq 1\}\) be an array of positive numbers. Let \(h(x)>0\) be a slowly varying function as \(x\rightarrow \infty \), and let \(\alpha > \frac{1}{2}\) and \(\alpha r\geq 1\). Suppose that, for \(0< t<2\), the following conditions hold for any \(\varepsilon >0\):
$$ \sum_{n=1}^{\infty } n^{\alpha r-2-\frac{1}{t}} (\log _{2} n)^{2} h(n) \sum_{j=1}^{n} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon n^{\frac{1}{t}} \bigr) < \infty $$
(2.6)
and
$$ \sum_{n=1}^{\infty } n^{\alpha r-2-\frac{2}{t}} (\log _{2} n)^{2} h(n) \sum_{j=1}^{n} a_{nj}^{2} E(X_{nj})^{2} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon n^{\frac{1}{t}}\bigr]< \infty . $$
(2.7)
Then, for all \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } n^{\alpha r-2} h(n) E \Biggl\{ n^{-\frac{1}{t}} \max_{1\leq k \leq n} \Biggl\vert \sum _{j=1}^{k} \bigl(a_{nj}X_{nj}-a_{nj}EX_{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon n^{\frac{1}{t}}\bigr]\bigr) \Biggr\vert - \varepsilon \Biggr\} _{+}< \infty . $$
(2.8)

Theorem 2.4

Let \(\{X_{nj}, j\geq 1, n\geq 1\}\) be an array of rowwise identically distributed PNQD random variables with \(E X_{11}=0\), and let \(h(x)>0\) be a slowly varying function as \(x\rightarrow \infty \). If \(E|X_{11}|^{(\alpha r+2)t} h(|X_{11}|^{t})< \infty \) for \(\alpha >\frac{1}{2}\), \(\alpha r\geq 1\), and \(0< t<2\), then
$$ \sum_{n=1}^{\infty } n^{\alpha r-2} h(n) E \Biggl\{ n^{-\frac{1}{t}} \max_{1\leq k \leq n} \Biggl\vert \sum _{j=1}^{k} X_{nj} \Biggr\vert - \varepsilon \Biggr\} _{+}< \infty . $$
(2.9)

Remark 2.3

Noting that the conditions of Theorem 2.4 are the same as those of Theorem 3.4 in Ko [1], we have
$$\begin{aligned} &\sum_{n=1}^{\infty } n^{\alpha r-2} h(n) E \Biggl\{ n^{-\frac{1}{t}} \max_{1\leq k \leq n} \Biggl\vert \sum _{j=1}^{k} X_{nj} \Biggr\vert - \varepsilon \Biggr\} _{+} \\ &\quad= \sum_{n=1}^{\infty } n^{\alpha r-2} h(n) \int _{0}^{\infty } P \Biggl(n^{-\frac{1}{t}} \max _{1\leq k\leq n} \Biggl\vert \sum_{j=1} ^{k} X_{nj} \Biggr\vert >\varepsilon +u \Biggr)\,du \\ &\quad\geq \sum_{n=1}^{\infty } n^{\alpha r-2} h(n) \int _{0}^{\varepsilon }P \Biggl( \max_{1\leq k\leq n} \Biggl\vert \sum_{j=1}^{k} X_{nj} \Biggr\vert >(\varepsilon +u)n^{\frac{1}{t}} \Biggr)\,du \\ &\quad\geq \varepsilon \sum_{n=1}^{\infty } n^{\alpha r-2} h(n) P \Biggl( \max_{1\leq k\leq n} \Biggl\vert \sum _{j=1}^{k} X_{nj} \Biggr\vert > 2 \varepsilon n^{\frac{1}{t}} \Biggr). \end{aligned}$$
Therefore (2.9) is much stronger than (3.13) of Theorem 3.4 in Ko [1]. Theorem 2.4 improves Theorem 3.4 in Ko [1].

Corollary 2.5

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables with mean zero, and let \(\{a_{nj}, j\geq 1, n\geq 1\}\) be an array of positive numbers. Let \(\{b_{n}, n\geq 1\}\) be a nondecreasing sequence of positive numbers, and let \(\{c_{n}, n\geq 1\}\) be a sequence of positive numbers. Assume that, for all \(\varepsilon >0\),
$$ \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a _{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon \log _{2} b_{n}\bigr) < \infty $$
(2.10)
and
$$ \sum_{n=1}^{\infty } c_{n} \sum _{j=1}^{b_{n}} a_{nj}^{2} E(X_{nj})^{2} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon \log _{2} b_{n} \bigr]< \infty . $$
(2.11)
Then, for all \(\varepsilon >0\),
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} E \Biggl\{ (\log _{2} b_{n} )^{-1} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} \bigl(a_{nj}X_{nj}-a_{nj}EX _{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon {\log _{2} b_{n}}\bigr]\bigr) \Biggr\vert - \varepsilon \Biggr\} _{+} \\ &\quad < \infty . \end{aligned}$$
(2.12)

Corollary 2.6

Let \(\{X_{nj}, 1\leq j\leq b_{n}, n\geq 1\}\) be an array of rowwise PNQD random variables with mean zero and finite variances. Let \(\{a_{nj}, j\geq 1, n\geq 1\}\) be an array of positive numbers satisfying
$$ \sum_{j=1}^{n} a_{nj}^{2} E(X_{nj})^{2} =O\bigl(n^{\delta }\bigr) \quad\textit{as } n \rightarrow \infty $$
(2.13)
for some \(0<\delta <1\). Then, for all \(\varepsilon >0\) and \(\alpha >0\),
$$ \sum_{n=1}^{\infty } n^{2(\alpha -1)} E \Biggl\{ n^{-\alpha } (\log _{2} n)^{-1} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} a_{nj} X _{nj} \Biggr\vert - \varepsilon \Biggr\} _{+}< \infty . $$
(2.14)

Remark 2.4

Note that
$$\begin{aligned} &\sum_{n=1}^{\infty } n^{2(\alpha -1)} E \Biggl\{ n^{-\alpha } (\log _{2} n)^{-1} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} a_{nj} X _{nj} \Biggr\vert - \varepsilon \Biggr\} _{+} \\ &\quad=\sum_{n=1}^{\infty } n^{2(\alpha -1)} \int _{0}^{\infty } P \Biggl(n ^{-\alpha } (\log _{2} n)^{-1} \max_{1\leq k\leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj}X_{nj} \Biggr\vert >\varepsilon +u \Biggr)\,du \\ &\quad\geq \sum_{n=1}^{\infty } n^{2(\alpha -1)} \int _{0}^{\varepsilon }P \Biggl( \max_{1\leq k\leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj} X_{nj} \Biggr\vert >(\varepsilon +u) n^{ \alpha } \log _{2} n \Biggr)\,du \\ &\quad\geq \varepsilon \sum_{n=1}^{\infty } n^{2(\alpha -1)} P \Biggl( \max_{1\leq k\leq b_{n}} \Biggl\vert \sum _{j=1}^{k} a_{nj} X_{nj} \Biggr\vert > 2\varepsilon n^{\alpha } \log _{2} n \Biggr). \end{aligned}$$
Therefore (2.14) is much stronger than (3.18) of Corollary 3.6 in Ko [1].

3 The proofs

To prove our results, we need some lemmas. The first one is a basic property of PNQD random variables, which can be found in Lehmann [2].

Lemma 3.1

Let \(\{X_{n}, n\geq 1\}\) be a sequence of PNQD random variables, and let \(\{f_{n}, n\geq 1\}\) be a sequence of nondecreasing functions. Then \(\{f_{n}(X_{n}), n\geq 1\}\) is still a sequence of PNQD random variables.

The next lemma comes from Wu [3] and plays an essential role in the proofs of this paper.

Lemma 3.2

Let \(\{X_{n}, n\geq 1\}\) be a sequence of PNQD random variables with mean zero and finite second moments. Then
$$ E\max_{1\leq k\leq n} \Biggl\vert \sum_{j=1}^{k} X_{j} \Biggr\vert ^{2}\leq C (\log _{2} n )^{2} \sum_{j=1}^{n} E X_{j}^{2}. $$
(3.1)
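Independent random variables satisfy Definition 1.1 with equality, so they are trivially PNQD, and (3.1) can be sanity-checked by simulation in this special case. The following Monte Carlo sketch is our own illustration (it tests the bound with \(C=1\) for a single sample size and is in no way a proof):

```python
import math
import random

random.seed(0)                       # deterministic illustration
n, reps = 64, 2000
acc = 0.0
for _ in range(reps):
    s, mx = 0.0, 0.0
    for _ in range(n):
        s += random.gauss(0.0, 1.0)  # X_j ~ N(0,1): mean zero, E X_j^2 = 1
        mx = max(mx, abs(s))         # running max of |S_k|
    acc += mx * mx
emax2 = acc / reps                   # Monte Carlo estimate of E max_k |S_k|^2
bound = (math.log2(n) ** 2) * n      # (log_2 n)^2 * sum_j E X_j^2
assert emax2 <= bound
```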
A positive measurable function \(h(x)\) on \([a, \infty )\) for some \(a>0\) is said to be slowly varying as \(x\rightarrow \infty \) if
$$ \lim_{x\rightarrow \infty }\frac{h(\lambda x)}{h(x)}=1 \quad\textit{for each } \lambda >0. $$
(3.2)
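For example, \(h(x)=\log x\) is slowly varying, whereas \(h(x)=x^{0.1}\) is not, since \((\lambda x)^{0.1}/x^{0.1}=\lambda ^{0.1}\neq 1\). A quick numerical check (our own illustration):

```python
import math

def ratio(h, lam, x):
    # h(lam * x) / h(x), which should approach 1 for slowly varying h
    return h(lam * x) / h(x)

x = 1e100
assert abs(ratio(math.log, 5.0, x) - 1.0) < 0.02   # log is slowly varying
assert ratio(lambda t: t ** 0.1, 5.0, x) > 1.1     # a positive power is not
```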

The last lemma can be found in Wu [21].

Lemma 3.3

If \(h(x)>0\) is a slowly varying function as \(x\rightarrow \infty \), then
  1. (i)

    \(\lim_{k\rightarrow \infty } \sup_{2^{k}\leq x<2^{k+1}} h(x)/ h \bigl(2^{k}\bigr) =1\), and

     
  2. (ii)

    \(c_{1} 2^{kr} h( \varepsilon 2^{k} ) \leq \sum_{j=1}^{k} 2^{jr} h ( \varepsilon 2^{j})\leq c_{2} 2^{kr} h ( \varepsilon 2^{k})\)

     
for all \(r>0\), \(\varepsilon >0\), and positive integers k, where \(c_{1}\) and \(c_{2}\) are positive constants.

Proof of Theorem 2.1

Let \(S_{j}= \sum_{i=1}^{j} (a_{ni}X _{ni}-a_{ni}EX_{ni} I[|a_{ni} X_{ni}| < \varepsilon b_{n}^{ \frac{1}{t}}]) \) for \(1\leq j\leq b_{n}\). For any fixed \(\varepsilon >0\),
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} E \Bigl\{ b_{n}^{-\frac{1}{t}} \max_{1\leq j\leq b_{n}} \vert S_{j} \vert -\varepsilon \Bigr\} _{+} \\ &\quad =\sum _{n=1}^{\infty } c_{n} \int _{0}^{\infty } P\Bigl(b_{n}^{-\frac{1}{t}} \max_{1\leq j\leq b_{n}} \vert S_{j} \vert -\varepsilon >u \Bigr)\,du \\ &\quad\leq \varepsilon \sum_{n=1}^{\infty } c_{n} P\Bigl(\max_{1\leq j\leq b _{n}} \vert S_{j} \vert >\varepsilon b_{n}^{\frac{1}{t}}\Bigr)+ \sum _{n=1}^{\infty } c_{n} \int _{\varepsilon }^{\infty } P\Bigl(\max_{1\leq j\leq b_{n}} \vert S_{j} \vert >u b_{n}^{\frac{1}{t}}\Bigr) \,du \\ &\quad= :I_{1}+I_{2}. \end{aligned}$$
Obviously, we have \(I_{1}<\infty \) by Theorem A. Hence we need only to prove \(I_{2}<\infty \). Clearly,
$$\begin{aligned} &P\Bigl(\max_{1\leq j\leq b_{n}} \vert S_{j} \vert >u b_{n}^{\frac{1}{t}}\Bigr) \\ &\quad=P \Biggl(\max_{1\leq j\leq b_{n}} \vert S_{j} \vert >u b_{n}^{\frac{1}{t}}, \bigcup _{j=1} ^{b_{n}} \bigl\{ \vert a_{nj} X_{nj} \vert \geq u b_{n}^{\frac{1}{t}}\bigr\} \Biggr) \\ &\qquad{}+P \Biggl(\max_{1\leq j\leq b_{n}} \vert S_{j} \vert >u b_{n}^{\frac{1}{t}}, \bigcap_{j=1}^{b_{n}} \bigl\{ \vert a_{nj} X_{nj} \vert < u b_{n}^{\frac{1}{t}}\bigr\} \Biggr) \\ &\quad \leq \sum_{j=1}^{b_{n}} P\bigl( \vert a_{nj} X_{nj} \vert \geq u b_{n}^{ \frac{1}{t}} \bigr) \\ &\qquad + P \Biggl(\max_{1\leq k \leq b_{n}} \Biggl|\sum_{j=1}^{k} \bigl(a_{nj} X _{nj} I\bigl( \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr)- a_{nj}EX _{nj}I\bigl( \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr) \bigr)\Biggr|>u b _{n}^{\frac{1}{t}} \Biggr). \end{aligned}$$
Then we can get
$$\begin{aligned} I_{2} \leq {}& \sum_{n=1}^{\infty } c_{n} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } P \bigl( \vert a_{nj} X_{nj} \vert >u b_{n}^{ \frac{1}{t}} \bigr)\,du \\ &{}+ \sum_{n=1}^{\infty }c_{n} \int _{\varepsilon }^{\infty } P \Biggl( \max_{1\leq k \leq b_{n}} \Biggl|\sum_{j=1}^{k} \bigl(a_{nj} X_{nj} I\bigl( \vert a _{nj} X_{nj} \vert < u b_{n}^{\frac{1}{t}}\bigr)\\ &{}- a_{nj}EX_{nj}I\bigl( \vert a_{nj} X_{nj} \vert < u b_{n}^{\frac{1}{t}} \bigr) \bigr)\Biggr|>u b_{n}^{\frac{1}{t}} \Biggr) \\ =:{}&I_{3}+I_{4}. \end{aligned}$$
Firstly, we will prove that \(I_{3}<\infty \). Noting that
$$ \int _{\varepsilon }^{\infty } P\bigl( \vert a_{nj} X_{nj} \vert \geq u b_{n}^{ \frac{1}{t}}\bigr)\,du\leq b_{n}^{-\frac{1}{t}}E \vert a_{nj}X_{nj} \vert I\bigl( \vert a_{nj} X _{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr), $$
by (2.1) we have
$$\begin{aligned} I_{3} &\leq \sum_{n=1}^{\infty } c_{n} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } P\bigl( \vert a_{nj} X_{nj} \vert \geq u b_{n}^{ \frac{1}{t}}\bigr)\,du \\ &\leq \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} \sum_{j=1}^{b _{n}} E \vert a_{nj}X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{ \frac{1}{t}} \bigr) < \infty . \end{aligned}$$
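The integral bound used for \(I_{3}\) follows from the change of variables \(v=u b_{n}^{\frac{1}{t}}\) and the tail formula for expectations: with \(Y=a_{nj}X_{nj}\),
$$ \int _{\varepsilon }^{\infty } P\bigl( \vert Y \vert \geq u b_{n}^{\frac{1}{t}}\bigr)\,du = b_{n}^{-\frac{1}{t}} \int _{\varepsilon b_{n}^{\frac{1}{t}}}^{\infty } P\bigl( \vert Y \vert \geq v\bigr)\,dv = b_{n}^{-\frac{1}{t}} E\bigl( \vert Y \vert -\varepsilon b_{n}^{\frac{1}{t}}\bigr)_{+} \leq b_{n}^{-\frac{1}{t}} E \vert Y \vert I\bigl( \vert Y \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr). $$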
To prove that \(I_{4}<\infty \), let
$$\begin{aligned} &Y_{nk}=-u b_{n}^{\frac{1}{t}} I\bigl(a_{nj}X_{nj}< -u b_{n}^{\frac{1}{t}}\bigr) +a _{nj} X_{nj}I\bigl( \vert a_{nj} X_{nj} \vert \leq u b_{n}^{\frac{1}{t}} \bigr) +u b_{n} ^{\frac{1}{t}} I\bigl(a_{nj}X_{nj}>u b_{n}^{\frac{1}{t}}\bigr), \\ &Z_{nk}=-u b_{n}^{\frac{1}{t}} I\bigl(a_{nj}X_{nj}< -u b_{n}^{\frac{1}{t}}\bigr) +u b_{n}^{\frac{1}{t}} I \bigl(a_{nj}X_{nj}>u b_{n}^{\frac{1}{t}}\bigr). \end{aligned}$$
We have
$$\begin{aligned} &P \Biggl(\max_{1\leq k \leq b_{n}} \Biggl|\sum_{j=1}^{k} \bigl(a_{nj} X _{nj} I\bigl( \vert a_{nj} X_{nj} \vert \leq u b_{n}^{\frac{1}{t}}\bigr)- a_{nj}EX _{nj}I\bigl( \vert a_{nj} X_{nj} \vert \leq u b_{n}^{\frac{1}{t}}\bigr) \bigr)\Biggr|>u b_{n}^{\frac{1}{t}} \Biggr) \\ &\quad =P \Biggl(\max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} (Y_{nk}-EY _{nk}-Z_{nk}+EZ_{nk}) \Biggr\vert >ub_{n}^{\frac{1}{t}} \Biggr). \end{aligned}$$
Then we have
$$\begin{aligned} I_{4} \leq {}& \sum_{n=1}^{\infty } c_{n} \int _{\varepsilon }^{\infty } P \Biggl(\max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} (Z_{nk}-EZ _{nk}) \Biggr\vert >\frac{u b_{n}^{\frac{1}{t}}}{2} \Biggr) \,du \\ &{}+ \sum_{n=1}^{\infty } c_{n} \int _{\varepsilon }^{\infty } P \Biggl(\max_{1\leq k \leq b_{n}} \Biggl\vert \sum_{j=1}^{k} (Y_{nk}-EY_{nk}) \Biggr\vert >\frac{u b_{n}^{\frac{1}{t}} }{2} \Biggr)\,du \\ =:{}&I_{5}+I_{6}. \end{aligned}$$
For \(I_{5}\), by the Markov inequality and (2.1) we have
$$\begin{aligned} I_{5} \leq {}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } u^{-1}E \vert Z_{nk} \vert \,du \\ \leq {}& C \sum_{n=1}^{\infty } c_{n} \sum_{j=1} ^{b_{n}} \int _{\varepsilon }^{\infty } P\bigl( \vert a_{nj}X_{nj} \vert >u b_{n}^{ \frac{1}{t}} \bigr) \,du \\ \leq {}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} \sum_{j=1} ^{b_{n}} E \vert a_{nj}X_{nj} \vert I\bigl( \vert a_{nj}X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr) < \infty . \end{aligned}$$
Now consider \(I_{6}\). By the Markov inequality and Lemma 3.2 we have
$$\begin{aligned} I_{6} \leq{} & C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} \int _{\varepsilon }^{\infty } u^{-2} E \Biggl( \max _{1\leq k \leq b _{n}} \Biggl\vert \sum_{j=1}^{k} (Y_{nk}-EY_{nk}) \Biggr\vert \Biggr)^{2} \,du \\ \leq {}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } u^{-2} E \vert Y _{nk} \vert ^{2} \,du \\ ={}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } u^{-2} \vert a _{nj} \vert ^{2} E \vert X_{nj} \vert ^{2} I\bigl( \vert a_{nj}X_{nj} \vert \leq u b_{n}^{\frac{1}{t}}\bigr) \\ &{}+ C \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum_{j=1}^{b _{n}} \int _{\varepsilon }^{\infty } P\bigl( \vert a_{nj}X_{nj} \vert > u b_{n}^{ \frac{1}{t}} \bigr) \,du \\ ={}& I_{7}+I_{8}. \end{aligned}$$
Firstly, we will prove that \(I_{8}<\infty \). By (2.1) we have
$$\begin{aligned} I_{8} &\leq C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj}X_{nj} \vert > \varepsilon b_{n}^{\frac{1}{t}} \bigr) \\ & < \infty . \end{aligned}$$
Next, we show that \(I_{7}<\infty \). We have
$$\begin{aligned} I_{7} ={}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } u^{-2} \vert a _{nj} \vert ^{2} E \vert X_{nj} \vert ^{2} I\bigl( \vert a_{nj}X_{nj} \vert \leq \varepsilon b_{n}^{ \frac{1}{t}}\bigr) \\ &{}+C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \int _{\varepsilon }^{\infty } u^{-2} \vert a _{nj} \vert ^{2} E \vert X_{nj} \vert ^{2} I\bigl(\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X _{nj} \vert \leq u b_{n}^{\frac{1}{t}} \bigr) \\ =:{}&I_{7}'+I_{7}''. \end{aligned}$$
By (2.2) it is easy to see that
$$\begin{aligned} I_{7}' &\leq C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert ^{2} E \vert X_{nj} \vert ^{2} I\bigl( \vert a _{nj}X_{nj} \vert \leq \varepsilon b_{n}^{\frac{1}{t}}\bigr) \int _{\varepsilon } ^{\infty } u^{-2}\,du \\ &< \infty . \end{aligned}$$
By (2.1) we have
$$\begin{aligned} I_{7}''={}&C \sum _{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}} \sum_{m=1}^{\infty } \int _{m\varepsilon }^{(m+1)\varepsilon } u^{-2} E \vert a_{nj} X_{nj} \vert ^{2} I\bigl(\varepsilon b_{n} ^{\frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq u b_{n}^{\frac{1}{t}}\bigr)\,du \\ \leq{}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} \sum _{m=1}^{\infty } m^{-2} E \vert a_{nj} X _{nj} \vert ^{2} I\bigl(\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq (m+1) \varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ ={}&C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \sum _{m=1}^{\infty } m^{-2} \sum _{s=1} ^{m} E \vert a_{nj} X_{nj} \vert ^{2} I\bigl(s\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X _{nj} \vert \leq (s+1)\varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ ={}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \sum _{s=1}^{\infty } E \vert a_{nj} X_{nj} \vert ^{2} I\bigl(s\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq (s+1)\varepsilon b_{n}^{\frac{1}{t}}\bigr) \sum_{m=s}^{\infty } m^{-2} \\ \leq{}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} \sum _{s=1}^{\infty } s^{-1} E \vert a_{nj} X _{nj} \vert ^{2} I\bigl(s\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq (s+1) \varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ ={}&C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \sum _{s=1}^{\infty } s^{-1} (s+1)^{2} \varepsilon ^{2} b_{n}^{\frac{2}{t}} E \biggl\vert \frac{a_{nj} X_{nj}}{(s+1) \varepsilon b_{n}^{\frac{1}{t}}} \biggr\vert ^{2} I\bigl(s\varepsilon b_{n}^{ \frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq (s+1)\varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ \leq{}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} \sum _{s=1}^{\infty } s^{-1} (s+1)^{2} b _{n}^{\frac{2}{t}} E \biggl\vert \frac{a_{nj} X_{nj}}{(s+1)\varepsilon b _{n}^{\frac{1}{t}}} \biggr\vert I \bigl(s\varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X _{nj} \vert \leq (s+1)\varepsilon b_{n}^{\frac{1}{t}} \bigr) \\ ={}&C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \sum _{s=1}^{\infty } s^{-1} (s+1) b_{n} ^{\frac{1}{t}} E \vert a_{nj} X_{nj} \vert I\bigl(s \varepsilon b_{n}^{\frac{1}{t}}< \vert a _{nj}X_{nj} \vert \leq (s+1)\varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ \leq{}& C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} b_{n}^{\frac{1}{t}} \sum _{s=1}^{\infty } E \vert a_{nj} X_{nj} \vert I\bigl(s \varepsilon b_{n}^{\frac{1}{t}}< \vert a_{nj}X_{nj} \vert \leq (s+1)\varepsilon b _{n}^{\frac{1}{t}}\bigr) \\ ={}&C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} E \vert a_{nj} X_{nj} \vert I\bigl( \vert a _{nj} X_{nj} \vert >\varepsilon b_{n}^{\frac{1}{t}} \bigr) < \infty . \end{aligned}$$
This completes the proof of the theorem. □

Proof of Theorem 2.2

We estimate
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ &\quad= \varepsilon \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}} E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{ \frac{1}{t}} } \biggr\vert I \biggl( \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert \geq 1 \biggr) \\ &\quad\leq \varepsilon \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}}E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{ \frac{1}{t}} } \biggr\vert ^{1+\lambda _{n}} I \biggl( \biggl\vert \frac{a_{nj} X _{nj}}{\varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert \geq 1 \biggr) \\ &\quad\leq C \sum_{n=1}^{\infty } c_{n} ( \log _{2} b_{n})^{2} \bigl(b_{n}^{ \frac{1}{t}} \bigr)^{-1-\lambda _{n}} \sum_{j=1}^{b_{n}} E \vert a_{nj} X_{nj} \vert ^{1+ \lambda _{n}}< \infty \end{aligned}$$
and
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} a_{nj}^{2} E(X_{nj})^{2} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \\ &\quad= \varepsilon ^{2} \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}} E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert ^{2} I \biggl( \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{ \frac{1}{t}} } \biggr\vert < 1 \biggr) \\ &\quad\leq C\sum_{n=1}^{\infty } c_{n} ( \log _{2} b_{n})^{2} \sum _{j=1} ^{b_{n}}E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert ^{{1+\lambda _{n}}} I \biggl( \biggl\vert \frac{a_{nj} X_{nj}}{ \varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert < 1 \biggr) \\ &\quad\leq C \sum_{n=1}^{\infty } c_{n} ( \log _{2} b_{n})^{2} \bigl(b_{n}^{ \frac{1}{t}} \bigr)^{-1-\lambda _{n}} \sum_{j=1}^{b_{n}} E \vert a_{nj} X_{nj} \vert ^{1+ \lambda _{n}}< \infty . \end{aligned}$$
Hence conditions (2.1) and (2.2) of Theorem 2.1 are satisfied. Since \(EX_{nj}=0\), we get
$$\begin{aligned} &b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} Ea_{nj}X_{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \Biggr\vert \\ &\quad\leq b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \sum _{j=1}^{k} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl[ \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr] \\ &\quad\leq C\bigl(b_{n}^{\frac{1}{t}}\bigr)^{-1-\lambda _{n}} \sum _{j=1}^{b_{n}} \vert a _{nj} \vert ^{1+\lambda _{n}} E \vert X_{nj} \vert ^{1+\lambda _{n}}\rightarrow 0 \quad\mbox{as } n\rightarrow \infty , \end{aligned}$$
and thus (2.5) follows. □

Proof of Corollary 2.3

Let \(c_{n}= n^{\alpha r-2} h(n) \) and \(b_{n}=n\). Then conditions (2.6) and (2.7) coincide with (2.1) and (2.2), and (2.8) follows from Theorem 2.1. □

Proof of Theorem 2.4

Take \(a_{nj}=1\), \(j\geq 1\), \(n\geq 1\), in Corollary 2.3. By Lemma 3.3 we have
$$\begin{aligned} &\sum_{n=1}^{\infty } n^{\alpha r-1-\frac{1}{t}} (\log _{2} n)^{2} h(n) E \vert X_{11} \vert I\bigl( \vert X_{11} \vert \geq \varepsilon n^{\frac{1}{t}}\bigr) \\ &\quad\leq C\sum_{k=1}^{\infty } \bigl(2^{k}\bigr)^{\alpha r-\frac{1}{t}} k^{2} h\bigl(2^{k} \bigr) E \vert X _{11} \vert I\bigl( \vert X_{11} \vert \geq \varepsilon \bigl(2^{k}\bigr)^{\frac{1}{t}}\bigr) \\ &\quad\leq C\sum_{k=1}^{\infty } \bigl(2^{k}\bigr)^{\alpha r+2-\frac{1}{t}} h\bigl(2^{k}\bigr) E \vert X_{11} \vert I\bigl( \vert X_{11} \vert \geq \varepsilon \bigl(2^{k}\bigr)^{\frac{1}{t}}\bigr) \\ &\quad\leq C\sum_{m=1}^{\infty } E \vert X_{11} \vert I\bigl[\varepsilon \bigl(2^{m} \bigr)^{ \frac{1}{t}} \leq \vert X_{11} \vert < \varepsilon \bigl(2^{m+1} \bigr)^{\frac{1}{t}}\bigr] \sum_{j=1}^{m} \bigl(2^{j}\bigr)^{ \alpha r+2-\frac{1}{t}} h\bigl(2^{j}\bigr) \\ &\quad \leq C\sum_{m=1}^{\infty } \bigl(2^{m}\bigr)^{\alpha r+2-\frac{1}{t}} h\bigl(2^{m}\bigr) E \vert X_{11} \vert I\bigl[\varepsilon \bigl(2^{m} \bigr)^{\frac{1}{t}} \leq \vert X_{11} \vert < \varepsilon \bigl(2^{m+1}\bigr)^{ \frac{1}{t}}\bigr] \\ &\quad\leq C E \vert X_{11} \vert ^{(\alpha r+2)t} h\bigl( \vert X_{11} \vert ^{t}\bigr)< \infty \end{aligned}$$
and
$$\begin{aligned} &\sum_{n=1}^{\infty } n^{\alpha r-1-\frac{2}{t}} (\log _{2} n)^{2} h(n) E(X_{11})^{2} I\bigl[ \vert X_{11} \vert < \varepsilon n ^{\frac{1}{t}}\bigr] \\ &\quad\leq C\sum_{k=1}^{\infty } \bigl(2^{k}\bigr)^{\alpha r-\frac{2}{t}} k^{2} h\bigl(2^{k} \bigr) \int _{0} ^{(2^{k})^{\frac{1}{t}}} x^{2}\,dF(x) \\ &\quad\leq C\sum_{k=1}^{\infty } \bigl(2^{k}\bigr)^{\alpha r+2-\frac{2}{t}} h\bigl(2^{k}\bigr) \int _{0} ^{(2^{k})^{\frac{1}{t}}} x^{2}\,dF(x) \\ &\quad\leq C\sum_{m=1}^{\infty } \bigl(2^{m}\bigr)^{\alpha r+2-\frac{2}{t}} h\bigl(2^{m}\bigr) \int _{(2^{m-1})^{\frac{1}{t}}} ^{(2^{m})^{\frac{1}{t}}} x^{2}\,dF(x) \\ &\quad= C\sum_{m=1}^{\infty } \bigl(2^{m} \bigr)^{\alpha r+2-\frac{2}{t}} \int _{(2^{m-1})^{\frac{1}{t}}} ^{(2^{m})^{\frac{1}{t}}} \frac{h(2 \times 2^{m-1})}{h( \vert x \vert ^{t})} h\bigl( \vert x \vert ^{t}\bigr) x^{2}\,dF(x) \\ &\quad\leq C\sum_{m=1}^{\infty } \bigl(2^{m}\bigr)^{\alpha r+2-\frac{2}{t}} \int _{(2^{m-1})^{\frac{1}{t}}} ^{(2^{m})^{\frac{1}{t}}} h\bigl( \vert x \vert ^{t}\bigr) x ^{2}\,dF(x) \\ &\quad\leq C\sum_{m=1}^{\infty } \int _{(2^{m-1})^{\frac{1}{t}}} ^{(2^{m})^{ \frac{1}{t}}} \bigl( \vert x \vert ^{t}\bigr)^{\alpha r+2-\frac{2}{t}} h\bigl( \vert x \vert ^{t} \bigr) x^{2}\,dF(x) \\ &\quad = C E \vert X_{11} \vert ^{(\alpha r+2)t} h\bigl( \vert X_{11} \vert ^{t}\bigr)< \infty , \end{aligned}$$
and thus (2.6) and (2.7) are satisfied. To complete the proof, it remains to show that, for \(1\leq j\leq n\), \(n^{-\frac{1}{t}} j |EX_{11} I[|X_{11}| < \varepsilon n^{\frac{1}{t}}] | \rightarrow 0\) as \(n \rightarrow \infty \).
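Both chains above rest on the same Cauchy-condensation-style regrouping. For orientation only (this sketch abstracts, rather than replaces, the estimates above), write \(g(n)\) for a generic nonnegative weight, here \(g(n)=n^{\alpha r-1}(\log _{2} n)^{2} h(n)\); then the interchange of summation runs as follows:

```latex
\begin{aligned}
\sum_{n=1}^{\infty} g(n)\, E|X_{11}|\, I\bigl(|X_{11}| \ge \varepsilon n^{1/t}\bigr)
 &\le C \sum_{k=1}^{\infty} 2^{k} g\bigl(2^{k}\bigr)\,
        E|X_{11}|\, I\bigl(|X_{11}| \ge \varepsilon 2^{k/t}\bigr) \\
 &= C \sum_{k=1}^{\infty} 2^{k} g\bigl(2^{k}\bigr)
      \sum_{m=k}^{\infty} E|X_{11}|\,
        I\bigl[\varepsilon 2^{m/t} \le |X_{11}| < \varepsilon 2^{(m+1)/t}\bigr] \\
 &= C \sum_{m=1}^{\infty} E|X_{11}|\,
        I\bigl[\varepsilon 2^{m/t} \le |X_{11}| < \varepsilon 2^{(m+1)/t}\bigr]
      \sum_{k=1}^{m} 2^{k} g\bigl(2^{k}\bigr).
\end{aligned}
```

The first inequality groups \(n\) over the dyadic blocks \(2^{k}\leq n<2^{k+1}\), and the last equality swaps the order of summation over \(k\) and \(m\); the inner sum is then dominated by its largest term, which is how both chains reach their final moment condition.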
If \((\alpha r+2)t <1\), then we have, as \(n \rightarrow \infty \),
$$ n^{-\frac{1}{t}} j \bigl\vert EX_{11} I\bigl[ \vert X_{11} \vert < \varepsilon n^{\frac{1}{t}}\bigr] \bigr\vert \leq ( \varepsilon )^{1-(\alpha r+2)t} n^{1-\frac{1}{t}} \bigl(n^{\frac{1}{t}} \bigr)^{1-(\alpha r+2)t} E \vert X_{11} \vert ^{( \alpha r+2)t} = (\varepsilon )^{1-(\alpha r+2)t} n^{-(\alpha r+1)} E \vert X_{11} \vert ^{( \alpha r+2)t}\rightarrow 0, $$
and if \((\alpha r+2)t \geq 1\), then, since \(EX_{11}=0\), we have, as \(n \rightarrow \infty \),
$$\begin{aligned} n^{-\frac{1}{t}} j \bigl\vert EX_{11} I\bigl[ \vert X_{11} \vert < \varepsilon n^{\frac{1}{t}}\bigr] \bigr\vert &\leq n^{1-\frac{1}{t}} \bigl\vert EX_{11} I\bigl[ \vert X_{11} \vert \geq \varepsilon n^{ \frac{1}{t}}\bigr] \bigr\vert \\ &\leq (\varepsilon )^{1-(\alpha r+2)t} n^{1-\frac{1}{t}} \bigl(n^{\frac{1}{t}} \bigr)^{1-(\alpha r+2)t} E \vert X_{11} \vert ^{( \alpha r+2)t} \\ &= (\varepsilon )^{1-(\alpha r+2)t} n^{-(\alpha r+1)} E \vert X_{11} \vert ^{( \alpha r+2)t}\rightarrow 0. \end{aligned}$$
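Both cases reduce to one elementary truncation inequality, applied with \(p=(\alpha r+2)t\) and \(a=\varepsilon n^{\frac{1}{t}}\): for \(0<p\leq 1\) one truncates below the level \(a\), and for \(p\geq 1\) above it,

```latex
E|X_{11}|\, I\bigl[|X_{11}| < a\bigr] \le a^{1-p}\, E|X_{11}|^{p}
  \quad (0 < p \le 1),
\qquad
E|X_{11}|\, I\bigl[|X_{11}| \ge a\bigr] \le a^{1-p}\, E|X_{11}|^{p}
  \quad (p \ge 1),
```

since \(|X_{11}|^{1-p}<a^{1-p}\) on \(\{|X_{11}|<a\}\) in the first case and \(|X_{11}|^{1-p}\leq a^{1-p}\) on \(\{|X_{11}|\geq a\}\) in the second.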
Hence the proof of Theorem 2.4 is completed. □

Proof of Corollary 2.5

Replacing \(b_{n}^{\frac{1}{t}}\) by \(\log _{2} b_{n}\) in Theorem 2.1, we obtain (2.12). □

Proof of Corollary 2.6

In Theorem 2.1, let \(c_{n}= n^{2( \alpha -1)}\) and \(b_{n}^{-\frac{1}{t}}=n^{-\alpha } (\log _{2} n)^{-1}\). By (2.13) we have
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{1}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl( \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr) \\ &\quad =\varepsilon \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}} E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{ \frac{1}{t}} } \biggr\vert I \biggl( \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert \geq 1 \biggr) \\ &\quad \leq \varepsilon \sum_{n=1}^{\infty } c_{n} (\log _{2} b_{n})^{2} \sum _{j=1}^{b_{n}}E \biggl\vert \frac{a_{nj} X_{nj}}{\varepsilon b_{n}^{ \frac{1}{t}} } \biggr\vert ^{2} I \biggl( \biggl\vert \frac{a_{nj} X_{nj}}{ \varepsilon b_{n}^{\frac{1}{t}} } \biggr\vert \geq 1 \biggr) \\ &\quad \leq C \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} E \vert a_{nj} X_{nj} \vert ^{2} \\ &\quad \leq C \sum_{n=1}^{\infty } n^{2(\alpha -1)} n^{-2\alpha } (\log _{2} {n})^{-2} (\log _{2} b_{n})^{2} n^{\delta } \\ &\quad \leq C \sum_{n=1}^{\infty } n^{-2+\delta } < \infty \end{aligned}$$
and
$$\begin{aligned} &\sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b_{n})^{2} \sum_{j=1}^{b_{n}} a_{nj}^{2} E(X_{nj})^{2} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \\ &\quad \leq \sum_{n=1}^{\infty } c_{n} b_{n}^{-\frac{2}{t}} (\log _{2} b _{n})^{2} \sum_{j=1}^{b_{n}} E \vert {a_{nj} X_{nj}} \vert ^{2} \\ &\quad\leq C\sum_{n=1}^{\infty } n^{2(\alpha -1)} n^{-2\alpha } (\log _{2} {n})^{-2} (\log _{2} b_{n})^{2} n^{\delta } \\ &\quad \leq C \sum_{n=1}^{\infty } n^{-2+\delta }< \infty . \end{aligned}$$
Hence conditions (2.1) and (2.2) of Theorem 2.1 are satisfied. Since \(EX_{nj}=0\), by (2.13) we get
$$\begin{aligned} &b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \Biggl\vert \sum _{j=1}^{k} Ea_{nj}X_{nj} I\bigl[ \vert a_{nj} X_{nj} \vert < \varepsilon b_{n}^{\frac{1}{t}}\bigr] \Biggr\vert \\ &\quad\leq b_{n}^{-\frac{1}{t}} \max_{1\leq k \leq b_{n}} \sum _{j=1}^{k} \vert a_{nj} \vert E \vert X_{nj} \vert I\bigl[ \vert a_{nj} X_{nj} \vert \geq \varepsilon b_{n}^{\frac{1}{t}}\bigr] \\ &\quad\leq C b_{n}^{-\frac{2}{t}} \sum_{j=1}^{b_{n}} \vert a_{nj} \vert ^{2} E \vert X_{nj} \vert ^{2} \\ &\quad\leq C n^{-2\alpha +\delta } (\log _{2} {n})^{-2} \rightarrow 0 \quad\text{as } n\rightarrow \infty , \end{aligned}$$
and thus (2.14) holds. □
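As a numerical sanity check (an illustration, not part of the proof), the two elementary pointwise bounds used repeatedly above, namely \(|u|I(|u|\geq 1)\leq u^{2}I(|u|\geq 1)\) from the proof of Corollary 2.6 and \(E|X|I(|X|<a)\leq a^{1-p}E|X|^{p}\) for \(0<p\leq 1\) from the proof of Theorem 2.4, hold deterministically on any sample, since they hold pointwise. The following sketch verifies them on simulated data (the sample size, level \(a\), and exponent \(p\) are arbitrary choices for illustration):

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # a generic sample

def mean(vals):
    return sum(vals) / len(vals)

# Markov-type step: |u| I(|u| >= 1) <= u^2 I(|u| >= 1),
# since |u| >= 1 on the indicator's event.
lhs1 = mean([abs(x) * (abs(x) >= 1.0) for x in xs])
rhs1 = mean([x * x * (abs(x) >= 1.0) for x in xs])
assert lhs1 <= rhs1

# Truncation step: E|X| I(|X| < a) <= a^(1-p) E|X|^p for 0 < p <= 1,
# since |X|^(1-p) < a^(1-p) on the event {|X| < a}.
a, p = 2.0, 0.5
lhs2 = mean([abs(x) * (abs(x) < a) for x in xs])
rhs2 = a ** (1 - p) * mean([abs(x) ** p for x in xs])
assert lhs2 <= rhs2

print("elementary bounds verified on the sample")
```

Because both inequalities are pointwise, the assertions pass for every sample, not merely with high probability.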

Declarations

Acknowledgements

The authors are grateful to the referee for carefully reading the manuscript and for providing some comments and suggestions, which led to improvements in this paper.

Funding

The research of M. Ge is partially supported by the NSF of Anhui Educational Committee (KJ2017B11, KJ2018A0428). The research of Z. Dai is partially supported by the NSF of Anhui Educational Committee (KJ2017B09). The research of Y. Wu is partially supported by the Natural Science Foundation of Anhui Province (1708085MA04), the Key Program in the Young Talent Support Plan in Universities of Anhui Province (gxyqZD2016316), and Chuzhou University scientific research fund (2017qd17).

Authors’ contributions

All authors contributed equally and read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1) School of Mathematics and Finance, Chuzhou University, Chuzhou, China
(2) College of Mathematics and Computer Science, Tongling University, Tongling, China

References

  1. Ko, M.: Complete convergences for arrays of row-wise PNQD random variables. Stoch. Int. J. Probab. Stoch. Process. 85, 172–180 (2013)
  2. Lehmann, E.L.: Some concepts of dependence. Ann. Math. Stat. 37, 1137–1153 (1966)
  3. Wu, Q.Y.: Convergence properties of pairwise NQD random sequence. Acta Math. Sin. 45, 617–624 (2002) (in Chinese)
  4. Matula, P.: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 15, 209–213 (1992)
  5. Jabbari, H.: On almost sure convergence for weighted sums of pairwise negatively quadrant dependent random variables. Stat. Pap. 54, 765–772 (2013)
  6. Li, R., Yang, W.G.: Strong convergence of pairwise NQD random sequences. J. Math. Anal. Appl. 334, 741–747 (2008)
  7. Wu, Y.F.: Strong convergence for weighted sums of arrays of row-wise pairwise NQD random variables. Collect. Math. 65, 119–130 (2014)
  8. Xu, H., Tang, L.: Some convergence properties for weighted sums of pairwise NQD sequences. J. Inequal. Appl. 2012, 255 (2012)
  9. Gan, S.X., Chen, P.Y.: Some limit theorems for sequences of pairwise NQD random variables. Acta Math. Sci. 28, 269–281 (2008)
  10. Wu, Y.F., Guan, M.: Mean convergence theorems and weak laws of large numbers for weighted sums of dependent random variables. J. Math. Anal. Appl. 377, 613–623 (2011)
  11. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31 (1947)
  12. Chow, Y.S.: On the rate of moment convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16(3), 177–201 (1988)
  13. Baum, L.E., Katz, M.: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120, 108–123 (1965)
  14. Sung, S.H.: Moment inequalities and complete moment convergence. J. Inequal. Appl. 2009, Article ID 271265 (2009)
  15. Wang, X.J., Hu, S.H.: Complete convergence and complete moment convergence for martingale difference sequence. Acta Math. Sin. 30, 119–132 (2014)
  16. Shen, A.T., Xue, M.X., Volodin, A.: Complete moment convergence for arrays of rowwise NSD random variables. Stoch. Int. J. Probab. Stoch. Process. 88(4), 606–621 (2016)
  17. Wu, Y.F., Cabrera, M.O., Volodin, A.: Complete convergence and complete moment convergence for arrays of rowwise END random variables. Glas. Mat. 49(69), 449–468 (2014)
  18. Wu, Y., Wang, X.J., Hu, S.H.: Complete moment convergence for weighted sums of weakly dependent random variables and its application in nonparametric regression model. Stat. Probab. Lett. 127, 56–66 (2017)
  19. Wang, X.J., Wu, Y., Hu, S.H.: Complete moment convergence for double indexed randomly weighted sums and its applications. Statistics 52, 503–518 (2018)
  20. Wu, Y., Wang, X.J.: Equivalent conditions of complete moment and integral convergence for a class of dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 112, 575–592 (2018)
  21. Wu, Q.Y.: Limit Theorems of Probability Theory for Mixing Sequences. Science Press, Beijing (2006)

Copyright

© The Author(s) 2019
