Complete convergence and complete moment convergence for weighted sums of m-NA random variables

Abstract

We study the complete convergence and the complete moment convergence for weighted sums of m-negatively associated (m-NA) random variables and obtain some new results. These results extend and improve the corresponding theorems of Sung (Stat. Pap. 52:447-454, 2011). In addition, we point out that an open problem presented in Sung (Stat. Pap. 54:773-781, 2013) can be solved by means of the method used in this paper.

1 Introduction

Let \(\{X, X_{n}, n\geq1\}\) be a sequence of random variables and \(\{a_{ni}, 1\leq i\leq n, n\geq1\}\) be an array of constants. Because the weighted sums \(\sum_{i=1}^{n}a_{ni}X_{i}\) play an important role in many useful linear statistics, the strong convergence of such weighted sums has been studied by many authors. We refer the reader to Cuzick [1], Wu [2], Bai and Cheng [3], Sung [4], Chen and Gan [5], Cai [6], Wu [7], Zarei and Jabbari [8], Sung [9], Sung [10], Shen [11], and Chen and Sung [12].

The concept of the complete convergence was introduced by Hsu and Robbins [13]. A sequence of random variables \(\{U_{n}, n\geq1\}\) is said to converge completely to a constant θ if

$$\sum_{n=1}^{\infty}P \bigl(|U_{n}-\theta|> \varepsilon \bigr)< \infty \quad \mbox{for all } \varepsilon>0. $$
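To make the definition concrete, here is a small numerical illustration (my example, not from the paper): if \(X_{1}, X_{2},\ldots\) are i.i.d. standard normal and \(S_{n}=X_{1}+\cdots+X_{n}\), then \(U_{n}=S_{n}/n\sim N(0,1/n)\), so \(P(|U_{n}|>\varepsilon)=\operatorname{erfc}(\varepsilon\sqrt{n/2})\), which is summable; that is, \(U_{n}\) converges completely to 0.

```python
import math

def tail_prob(n, eps):
    """P(|S_n / n| > eps) when S_n is a sum of n i.i.d. N(0,1) variables.

    S_n / n ~ N(0, 1/n), so P(|S_n / n| > eps) = erfc(eps * sqrt(n / 2))."""
    return math.erfc(eps * math.sqrt(n / 2.0))

eps = 0.5
terms = [tail_prob(n, eps) for n in range(1, 201)]
partial = sum(terms)
# The terms decay like exp(-n * eps**2 / 2), so the series
# sum_n P(|U_n| > eps) is finite: U_n = S_n / n converges completely to 0.
print(partial, terms[-1])
```

The partial sums stabilize quickly, while for convergence in probability alone the tail probabilities need only tend to 0, not be summable.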

Chow [14] presented the following more general concept of the complete moment convergence. Let \(\{Z_{n}, n \geq1\}\) be a sequence of random variables and \(a_{n}>0\), \(b_{n}>0\), \(q>0\). If

$$\sum_{n=1}^{\infty}a_{n}E\bigl\{ b_{n}^{-1}|Z_{n}|-\varepsilon\bigr\} _{+}^{q}< \infty \quad\mbox{for some or all } \varepsilon>0, $$

then \(\{Z_{n}, n\geq1\}\) is said to satisfy the complete moment convergence.

The following concept was introduced by Joag-Dev and Proschan [15].

Definition 1.1

A finite family of random variables \(\{ X_{k}, 1\leq k\leq n\}\) is said to be negatively associated (abbreviated to NA) if for any disjoint subsets A and B of \(\{1, 2,\ldots, n\}\) and any real coordinate-wise nondecreasing functions f on \(R^{A}\) and g on \(R^{B}\),

$$\operatorname{Cov}\bigl(f(X_{i}, i \in A), g(X_{j}, j\in B) \bigr)\leq0 $$

whenever the covariance exists. An infinite family of random variables is NA if every finite subfamily is NA.

Definition 1.2

Let \(m\geq1\) be a fixed integer. A sequence of random variables \(\{X_{n}, n\geq1\}\) is said to be m-negatively associated (abbreviated to m-NA) if for any \(n\geq2\) and any \(i_{1},\ldots,i_{n}\) such that \(|i_{k}-i_{j}|\geq m\) for all \(1\leq k\neq j\leq n\), \(X_{i_{1}},\ldots,X_{i_{n}}\) are NA.

The concept of m-NA random variables was introduced by Hu et al. [16]. It is easy to see that this concept is a natural extension of NA random variables (the case \(m=1\)).

It is well known that the properties of NA random variables have found applications in reliability theory, multivariate statistical analysis, and percolation theory. Sequences of NA random variables have been an attractive research topic in the recent literature; see, for example, Matula [17], Su et al. [18], Shao [19], Gan and Chen [20], Fu and Zhang [21], Baek et al. [22], Chen et al. [23], Cai [6], Xing [24], Sung [10], Qin and Li [25], and Wu [26]. Since NA implies m-NA, it is of significant interest to study the convergence properties of the wider m-NA class. However, to the best of our knowledge, apart from Hu et al. [16] and Hu et al. [27], few authors have discussed the convergence properties of sequences of m-NA random variables.

Cai [6] studied the complete convergence for weighted sums of identically distributed NA random variables. He obtained the following theorem.

Theorem A

Let \(\{X, X_{n}, n\geq1\}\) be a sequence of identically distributed NA random variables, and let \(\{a_{ni}, 1\leq i\leq n, n\geq1\}\) be an array of constants satisfying

$$ A_{\alpha}=\limsup_{n\rightarrow\infty} A_{\alpha, n}< \infty,\qquad A_{\alpha, n}^{\alpha}=\sum_{i=1}^{n}|a_{ni}|^{\alpha}/n $$
(1.1)

for some \(0<\alpha\leq2\). Let \(b_{n}=n^{1/\alpha}(\log n)^{1/\gamma}\) for some \(\gamma>0\). Furthermore, suppose that \(EX=0\) when \(1<\alpha\leq2\). If \(E\exp(h|X|^{\gamma})<\infty\) for some \(h>0\), then

$$ \sum_{n=1}^{\infty}n^{-1}P \Biggl(\max _{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>b_{n}\varepsilon \Biggr)< \infty \quad \textit{for all } \varepsilon>0. $$
(1.2)

Sung [10] improved Theorem A by replacing the exponential moment condition with the following much weaker moment conditions.

Theorem B

Let \(\{X, X_{n}, n\geq1\}\) be a sequence of identically distributed NA random variables, and let \(\{a_{ni}, 1\leq i\leq n, n\geq1\}\) be an array of constants satisfying (1.1) for some \(0<\alpha\leq2\). Let \(b_{n}=n^{1/\alpha}(\log n)^{1/\gamma}\) for some \(\gamma>0\). Furthermore, suppose that \(EX=0\) when \(1<\alpha\leq2\). Then the following statements hold:

(i) If \(\alpha>\gamma\), then \(E|X|^{\alpha}<\infty\) implies (1.2).

(ii) If \(\alpha=\gamma\), then \(E|X|^{\alpha}\log|X|<\infty\) implies (1.2).

(iii) If \(\alpha<\gamma\), then \(E|X|^{\gamma}<\infty\) implies (1.2).

The main purpose of this article is to discuss the complete convergence and the complete moment convergence for weighted sums of m-NA random variables. We shall extend Theorem B to m-NA random variables. In addition, we shall extend and improve Theorem B by obtaining a much stronger conclusion under the same conditions (see Remark 3.2).

It is worth pointing out that the open problem presented in Sung [9] (see Remark 2.2) can be solved by means of the method used in this article (see Remark 3.4).

Throughout this paper, the symbol C represents positive constants whose values may change from one place to another. For a finite set A the symbol \(\sharp(A)\) denotes the number of elements in the set A.

2 Preliminaries

We first recall the following concept of stochastic domination, which is a slight generalization of the identically distributed case. A sequence of random variables \(\{X_{n}, n\geq1\}\) is said to be stochastically dominated by a random variable X (write \(\{X_{n}\} \prec X\)) if there exists a constant \(C>0\) such that

$$\sup_{n\geq1}P \bigl(|X_{n}|>x \bigr)\leq CP \bigl(|X|>x \bigr), \quad \forall x>0. $$
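A toy illustration (mine, not the paper's): if \(X_{n}\sim\operatorname{Exp}(n)\) and \(X\sim\operatorname{Exp}(1)\), then \(P(|X_{n}|>x)=e^{-nx}\leq e^{-x}=P(|X|>x)\) for all \(x>0\), so \(\{X_{n}\}\prec X\) with \(C=1\). A quick numerical check:

```python
import math

def tail_exp(rate, x):
    """P(Y > x) for Y ~ Exponential(rate) and x > 0."""
    return math.exp(-rate * x)

C = 1.0
for x in [0.01, 0.1, 1.0, 5.0]:
    # The supremum over n is attained at n = 1 here, so a finite range suffices.
    sup_tail = max(tail_exp(n, x) for n in range(1, 51))
    assert sup_tail <= C * tail_exp(1, x) + 1e-15
print("stochastic domination verified on the grid")
```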

The following exponential inequality for m-NA random variables can be proved by means of Theorem 3 in Shao [19] and the proof of Lemma 2 in Hu et al. [27]. Here we omit the details.

Lemma 2.1

Let \(\{X_{n},n\geq1\}\) be a sequence of m-NA random variables with zero means and finite second moments. Let \(S_{j}=\sum_{k=1}^{j}X_{k}\) and \(B_{n}=\sum_{k=1}^{n}EX_{k}^{2}\). Then for all \(n\geq m\), \(x>0\) and \(a>0\),

$$\begin{aligned} P \Bigl(\max_{1\leq j\leq n}|S_{j}|\geq x \Bigr) \leq{}& 2mP \Bigl(\max_{1\leq j\leq n}|X_{j}|>a \Bigr) \\ &{}+ 4m\exp \biggl\{ -\frac{x^{2}}{8m^{2}B_{n}} \biggr\} +4m \biggl\{ \frac{mB_{n}}{4(xa+mB_{n})} \biggr\} ^{x/(12ma)}. \end{aligned}$$
(2.1)

Remark 2.1

Since \({\mathrm{e}}^{-x}\leq(1+x)^{-1}\) for \(x>0\), we get, for \(x>0\) and \(a>0\),

$$\exp \biggl\{ -\frac{x^{2}}{8m^{2}B_{n}} \biggr\} = \exp \biggl\{ -\frac{3xa}{2mB_{n}} \biggr\} ^{x/(12ma)} \leq \biggl(1+\frac{3xa}{2mB_{n}} \biggr)^{-x/(12ma)}. $$

Moreover,

$$\biggl\{ \frac{mB_{n}}{4(xa+mB_{n})} \biggr\} ^{x/(12ma)} \leq \biggl(1+\frac {4xa}{mB_{n}} \biggr)^{-x/(12ma)} \leq \biggl(1+\frac{3xa}{2mB_{n}} \biggr)^{-x/(12ma)}. $$

Therefore, it follows from (2.1) that

$$ P\Bigl(\max_{1\leq j\leq n}|S_{j}|\geq x\Bigr) \leq 2m\sum _{k=1}^{n}P\bigl(|X_{k}|>a\bigr)+8m \biggl(1+\frac{3xa}{2mB_{n}} \biggr)^{-x/(12ma)}. $$
(2.2)
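The two elementary bounds behind (2.2) can be sanity-checked numerically; a sketch over an arbitrary grid of positive values (the grid is my choice):

```python
import itertools
import math

# Check, for positive m, x, a, B (with c = x / (12 m a)):
#   exp(-x^2 / (8 m^2 B))       <= (1 + 3 x a / (2 m B))^(-c)
#   (m B / (4 (x a + m B)))^c   <= (1 + 3 x a / (2 m B))^(-c)
for m, x, a, B in itertools.product([1, 2, 5], [0.5, 1.0, 10.0],
                                    [0.1, 1.0, 3.0], [0.5, 2.0, 10.0]):
    c = x / (12.0 * m * a)
    rhs = (1.0 + 3.0 * x * a / (2.0 * m * B)) ** (-c)
    lhs1 = math.exp(-x * x / (8.0 * m * m * B))
    lhs2 = (m * B / (4.0 * (x * a + m * B))) ** c
    assert lhs1 <= rhs + 1e-12
    assert lhs2 <= rhs + 1e-12
print("Remark 2.1 bounds hold on the grid")
```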

Now we present a Rosenthal-type inequality for maximum partial sums of m-NA random variables, which is the crucial tool in the proof of our main results.

Lemma 2.2

Let \(\{X_{n},n\geq1\}\) be a sequence of m-NA random variables with mean zero and \(E|X_{k}|^{q}<\infty\) for every \(1\leq k\leq n\). Let \(S_{j}=\sum_{k=1}^{j}X_{k}\), \(1\leq j\leq n\). Then for \(q\geq2\), there exists a positive constant C depending only on q such that

$$ E\max_{1\leq j\leq n}|S_{j}|^{q} \leq C \Biggl\{ \sum _{k=1}^{n}E|X_{k}|^{q}+ \Biggl(\sum_{k=1}^{n}EX_{k}^{2} \Biggr)^{q/2} \Biggr\} . $$
(2.3)

Proof

Let \(B_{n}=\sum_{k=1}^{n}EX_{k}^{2}\). Recall the elementary identity

$$ E|Y|^{q}=q\int_{0}^{\infty} P \bigl(|Y|\geq x \bigr)x^{q-1}\,{\mathrm{d}}x \qquad\bigl(E|Y|^{q}< \infty \bigr). $$
(2.4)

By taking \(a=x/(12mq)\) in (2.2), we have

$$\begin{aligned} E\max_{1\leq j\leq n}|S_{j}|^{q} =&q\int _{0}^{\infty} P\Bigl(\max_{1\leq j\leq n}|S_{j}| \geq x\Bigr)x^{q-1}\,{\mathrm{d}}x \\ \leq&2mq\sum_{k=1}^{n}\int _{0}^{\infty}P\bigl(|X_{k}|\geq x/(12mq) \bigr)x^{q-1}\,{\mathrm {d}}x \\ &{}+8mq\int_{0}^{\infty} \biggl(1+\frac{x^{2}}{8m^{2}qB_{n}} \biggr)^{-q}x^{q-1}\,{\mathrm{d}}x \\ = :&A+B. \end{aligned}$$

By (2.4), we have \(A=2^{2q+1}3^{q}m^{q+1}q^{q}\sum_{k=1}^{n}E|X_{k}|^{q}\). Substituting \(t=x^{2}/(8m^{2}qB_{n})\) gives

$$\begin{aligned} B =&2^{2+3q/2}m^{1+q}q^{1+q/2}(B_{n})^{q/2} \int_{0}^{\infty} (1+t)^{-q} t^{ q/2-1} \,{\mathrm{d}}t \\ =&2^{2+3q/2}m^{1+q}q^{1+q/2} B(q/2, q/2) \Biggl(\sum _{k=1}^{n}EX_{k}^{2} \Biggr)^{q/2}, \end{aligned}$$

where

$$B(\alpha, \beta)=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1} \,{\mathrm{d}}t =\int_{0}^{\infty}t^{\alpha-1}(1+t)^{-(\alpha+\beta)} \,{\mathrm{d}}t. $$

Letting \(C=\max\{2^{2q+1}3^{q}m^{q+1}q^{q}, 2^{2+3q/2}m^{1+q}q^{1+q/2} B(q/2, q/2)\}\), we can get (2.3). The proof is complete. □
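The Beta-function identity used above, \(B(\alpha,\beta)=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1}\,\mathrm{d}t=\int_{0}^{\infty}t^{\alpha-1}(1+t)^{-(\alpha+\beta)}\,\mathrm{d}t=\Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)\), can be verified numerically (a sketch; the midpoint rule and tolerance are my choices):

```python
import math

def beta_gamma(a, b):
    """B(a, b) computed from the Gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_integral(a, b, n=200_000):
    """Midpoint-rule approximation of int_0^1 u^(a-1) (1-u)^(b-1) du.

    The substitution t = u / (1 - u) turns this into the second
    representation, int_0^inf t^(a-1) (1+t)^(-(a+b)) dt, so the two agree."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += u ** (a - 1.0) * (1.0 - u) ** (b - 1.0) * h
    return total

q = 3.0
print(beta_integral(q / 2, q / 2), beta_gamma(q / 2, q / 2))
```

For \(q=3\) both values are \(B(3/2,3/2)=\pi/8\).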

Lemma 2.3

(Wang et al. [28])

Let \(\{X_{n}, n\geq1\}\) be a sequence of random variables with \(\{X_{n}\}\prec X\). Then there exists a constant C such that, for all \(q>0\) and \(x>0\),

(i) \(E|X_{k}|^{q}I(|X_{k}|\leq x)\leq C\{E|X|^{q}I(|X|\leq x)+x^{q}P(|X|>x)\}\);

(ii) \(E|X_{k}|^{q}I(|X_{k}|>x)\leq CE|X|^{q}I(|X|> x)\).

The following lemma, which improves Lemma 2.2 and Lemma 2.3 of Sung [10], is very important in the proof of our results.

Lemma 2.4

Let \(\{a_{ni}, 1\leq i\leq n, n\geq1\}\) be an array of constants satisfying \(\sum_{i=1}^{n}|a_{ni}|^{\alpha}\leq n\) for some \(\alpha>0\). Let \(b_{n}=n^{1/\alpha}(\log n)^{1/\gamma}\) for some \(\gamma>0\). Then

$$I = : \sum_{n=2}^{\infty}n^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I\bigl(|a_{ni}X|>b_{n}\bigr) \leq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} CE|X|^{\alpha} & \textit{for } \alpha>\gamma,\\ CE|X|^{\alpha}\log|X| & \textit{for } \alpha=\gamma,\\ CE|X|^{\gamma} & \textit{for } \alpha< \gamma. \end{array}\displaystyle \displaystyle \right . $$

Proof

From \(\sum_{i=1}^{n}|a_{ni}|^{\alpha}\leq n\), we have

$$\begin{aligned} I =&\sum_{n=2}^{\infty}n^{-2}(\log n)^{-\alpha/\gamma}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I \bigl(|X|^{\alpha}>n (\log n)^{\alpha/\gamma }|a_{ni}|^{-\alpha} \bigr) \\ \leq&\sum_{n=2}^{\infty}n^{-2}(\log n)^{-\alpha/\gamma}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I \Biggl(|X|^{\alpha}>n (\log n)^{\alpha /\gamma} \Biggl(\sum _{i=1}^{n}|a_{ni}|^{\alpha} \Biggr)^{-1} \Biggr) \\ \leq&\sum_{n=2}^{\infty}n^{-2}(\log n)^{-\alpha/\gamma}\sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I \bigl(|X|>(\log n)^{1/\gamma}\bigr) \\ \leq&\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha }I\bigl(|X|>(\log n)^{1/\gamma}\bigr) \\ =&\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha/\gamma}\sum_{m=n}^{\infty }E|X|^{\alpha}I \bigl(\log m< |X|^{\gamma}\leq\log(m+1)\bigr) \\ =&\sum_{m=2}^{\infty}E|X|^{\alpha}I\bigl( \log m< |X|^{\gamma}\leq\log (m+1)\bigr)\sum_{n=2}^{m}n^{-1}( \log n)^{-\alpha/\gamma}. \end{aligned}$$

Observing that

$$\sum_{n=2}^{m}n^{-1}(\log n)^{-\alpha/\gamma} \leq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} {{C }} & \mbox{for } \alpha>\gamma,\\ {{C \log\log m}} & \mbox{for } \alpha=\gamma,\\ {{C (\log m)^{1-\alpha/\gamma}}} & \mbox{for }\alpha< \gamma, \end{array}\displaystyle \displaystyle \right . $$

we can get

$$I \leq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} {{CE|X|^{\alpha} }} & \mbox{for } \alpha>\gamma, \\ {{CE|X|^{\alpha}\log|X|}} & \mbox{for } \alpha=\gamma, \\ {{CE|X|^{\gamma}}} & \mbox{for } \alpha< \gamma. \end{array}\displaystyle \displaystyle \right . $$

The proof of Lemma 2.4 is completed. □
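The three growth regimes of \(\sum_{n=2}^{m}n^{-1}(\log n)^{-\alpha/\gamma}\) used in the proof can be observed numerically (a rough sketch; the cut-offs and tolerances are mine):

```python
import math

def partial_sum(ratio, m):
    """sum_{n=2}^{m} n^(-1) (log n)^(-ratio), with ratio = alpha / gamma."""
    return sum(1.0 / (n * math.log(n) ** ratio) for n in range(2, m + 1))

m1, m2 = 10_000, 100_000
# ratio > 1 (alpha > gamma): the series converges, so partial sums barely move.
assert partial_sum(2.0, m2) - partial_sum(2.0, m1) < 0.05
# ratio = 1 (alpha = gamma): the partial sums grow like log log m.
assert abs(partial_sum(1.0, m2) / math.log(math.log(m2))
           - partial_sum(1.0, m1) / math.log(math.log(m1))) < 0.5
# ratio < 1 (alpha < gamma): the partial sums grow like (log m)^(1 - ratio).
r = 0.5
assert partial_sum(r, m2) <= 1.5 * partial_sum(r, m1) * (math.log(m2) / math.log(m1)) ** (1 - r)
print("growth regimes consistent with the proof of Lemma 2.4")
```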

Remark 2.2

Noting that

$$\sum_{n=2}^{\infty}n^{-1}\sum _{i=1}^{n}P\bigl(|a_{ni}X|>b_{n}\bigr) \leq \sum_{n=2}^{\infty}n^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}E|a_{ni}X|^{\alpha}I\bigl(|a_{ni}X|>b_{n}\bigr), $$

we know that Lemma 2.4 improves Lemma 2.2 and Lemma 2.3 of Sung [10]. In addition, the method used in this paper is novel and much simpler than that in Sung [10].

3 Main result

In this section, we state our main results and their proofs.

Theorem 3.1

Let \(\{X_{n}, n\geq1\}\) be a sequence of m-NA random variables with \(\{X_{n}\} \prec X\), and let \(\{a_{ni}, 1\leq i\leq n, n\geq1\}\) be an array of constants satisfying (1.1) for some \(0<\alpha\leq2\). Let \(b_{n}=n^{1/\alpha}(\log n)^{1/\gamma}\) for some \(\gamma>0\). Furthermore, suppose that \(EX_{i}=0\) when \(1<\alpha\leq2\). Then the following statements hold:

(i) If \(\alpha>\gamma\), then \(E|X|^{\alpha}<\infty\) implies (1.2).

(ii) If \(\alpha=\gamma\), then \(E|X|^{\alpha}\log|X|<\infty\) implies (1.2).

(iii) If \(\alpha<\gamma\), then \(E|X|^{\gamma}<\infty\) implies (1.2).

Remark 3.1

Since NA implies m-NA, Theorem 3.1 extends Theorem B. Compared with Sung [10], the proof of Theorem 3.1 is different from that of Theorem 2.1 in Sung [10].

Corollary 3.1

Let \(\{X_{n}, n\geq1\}\) be a sequence of m-NA random variables with \(\{X_{n}\} \prec X\), and let \(\{a_{i}, 1\leq i\leq n\}\) be a sequence of constants satisfying

$$A_{\alpha}=\limsup_{n\rightarrow\infty} A_{\alpha, n}< \infty,\qquad A_{\alpha, n}^{\alpha}=\sum_{i=1}^{n}|a_{i}|^{\alpha}/n $$

for some \(0<\alpha\leq2\). Let \(b_{n}=n^{1/\alpha}(\log n)^{1/\gamma}\) for some \(\gamma>0\). Furthermore, suppose that \(EX_{i}=0\) when \(1<\alpha\leq2\). Then

$$b_{n}^{-1}\sum_{i=1}^{n}a_{i}X_{i} \rightarrow0 \quad\textit{a.s.} $$

This corollary can be proved by an argument similar to the proof of Corollary 2.1 in Cai [6]; we omit the details.
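As a quick sanity check of the corollary in its simplest special case (my simulation: i.i.d. \(N(0,1)\) variables, which are trivially m-NA, \(a_{i}=1\), \(\alpha=2\), \(\gamma=1\), so \(b_{n}=\sqrt{n}\log n\)):

```python
import math
import random

random.seed(12345)
n_max = 100_000
checkpoints = {100, 1_000, 10_000, 100_000}
s = 0.0
ratios = {}
for n in range(1, n_max + 1):
    s += random.gauss(0.0, 1.0)
    if n in checkpoints:
        b_n = math.sqrt(n) * math.log(n)  # b_n = n^{1/alpha} (log n)^{1/gamma}
        ratios[n] = abs(s) / b_n
# |S_n| is typically of order sqrt(n), so |S_n| / b_n ~ 1 / log n
# should drift toward 0 as n grows.
print(ratios)
```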

Theorem 3.2

Assume that the conditions of Theorem 3.1 hold. Then the following statements hold:

(i) If \(\alpha>\gamma\), then \(E|X|^{\alpha}<\infty\) implies

$$ \sum_{n=2}^{\infty}n^{-1}E \Biggl\{ b_{n}^{-1}\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}a_{ni}X_{i} \Biggr|- \varepsilon \Biggr\} _{+}^{\alpha}< \infty \quad\textit{for all } \varepsilon>0. $$
(3.1)

(ii) If \(\alpha=\gamma\), then \(E|X|^{\alpha}\log|X|<\infty\) implies (3.1).

(iii) If \(\alpha<\gamma\), then \(E|X|^{\gamma}<\infty\) implies (3.1).

Remark 3.2

Note that

$$\begin{aligned} &\sum_{n=2}^{\infty}n^{-1}E \Biggl\{ b_{n}^{-1}\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}a_{ni}X_{i} \Biggr|- \varepsilon \Biggr\} _{+}^{\alpha} \\ &\quad=\sum_{n=2}^{\infty}n^{-1}\int _{0}^{\infty}P \Biggl(b_{n}^{-1}\max _{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>\varepsilon+t^{1/\alpha } \Biggr)\,{\mathrm{d}}t \\ &\quad=\int_{0}^{\infty}\sum _{n=2}^{\infty}n^{-1}P \Biggl(b_{n}^{-1} \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>\varepsilon+t^{1/\alpha } \Biggr)\,{\mathrm{d}}t. \end{aligned}$$

Therefore, Theorem 3.2 extends and improves Theorem B.
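The identity underlying the display above, \(E\{Y-\varepsilon\}_{+}^{\alpha}=\int_{0}^{\infty}P(Y>\varepsilon+t^{1/\alpha})\,\mathrm{d}t\) for \(Y\geq0\), can be checked numerically; a sketch with \(Y\sim\operatorname{Exp}(1)\), \(\alpha=2\), \(\varepsilon=1/2\), where both sides equal \(2e^{-1/2}\) (my example):

```python
import math
import random

alpha, eps = 2.0, 0.5
# Closed form for Y ~ Exp(1): E{(Y - eps)_+}^2 = 2 * exp(-eps).
exact = 2.0 * math.exp(-eps)

# Right-hand side: int_0^inf P(Y > eps + t^{1/alpha}) dt with
# P(Y > y) = exp(-y); midpoint rule on a truncated range.
n, upper = 200_000, 400.0
h = upper / n
rhs = sum(math.exp(-(eps + ((i + 0.5) * h) ** (1.0 / alpha))) * h
          for i in range(n))

# Left-hand side by Monte Carlo.
random.seed(0)
samples = 200_000
lhs = sum(max(random.expovariate(1.0) - eps, 0.0) ** alpha
          for _ in range(samples)) / samples

print(lhs, rhs, exact)
```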

Proof of Theorem 3.1

Without loss of generality, we may assume that \(a_{ni}\geq0\). For fixed \(n\geq1\), let

$$\begin{aligned}& Y_{ni}=-b_{n}I(a_{ni}X_{i}< -b_{n})+a_{ni}X_{i} I\bigl(a_{ni}|X_{i}|\leq b_{n}\bigr) +b_{n}I(a_{ni}X_{i}>b_{n}), \\& Z_{ni}=(a_{ni}X_{i}+b_{n})I(a_{ni}X_{i}< -b_{n})+ (a_{ni}X_{i}-b_{n})I(a_{ni}X_{i}>b_{n}). \end{aligned}$$

Then \(Y_{ni}+Z_{ni}=a_{ni}X_{i}\), and it follows by the definition of m-NA and Property 6 of Joag-Dev and Proschan [15] that \(\{Y_{ni}, 1\leq i\leq n, n\geq1\}\) is an array of m-NA random variables. Then

$$\begin{aligned} &\sum_{n=1}^{\infty}n^{-1}P \Biggl( \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>b_{n}\varepsilon \Biggr) \\ &\quad\leq 1 + \sum_{n=2}^{\infty}n^{-1} \sum_{i=1}^{n}P \bigl(a_{ni}|X_{i}|>b_{n} \bigr) + \sum_{n=2}^{\infty}n^{-1}P \Biggl( \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}Y_{ni} \Biggr|>b_{n}\varepsilon \Biggr) \\ &\quad=:1+H_{1}+H_{2}. \end{aligned}$$

By \(\{X_{n}\} \prec X\) and Lemma 2.4, we have

$$\begin{aligned} H_{1}\leq C\sum_{n=2}^{\infty}n^{-1} \sum_{i=1}^{n}P \bigl(a_{ni}|X|>b_{n} \bigr) \leq C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}a_{ni}^{\alpha }E|X|^{\alpha}I \bigl(a_{ni}|X|>b_{n} \bigr)< \infty. \end{aligned}$$

Next we prove \(H_{2}<\infty\). Note that each of the moment conditions \(E|X|^{\alpha}\log|X|<\infty\) (for \(\alpha=\gamma\)) and \(E|X|^{\gamma}<\infty\) (for \(\alpha<\gamma\)) implies \(E|X|^{\alpha}<\infty\), so \(E|X|^{\alpha}<\infty\) holds in all three cases. From (1.1), without loss of generality, we may assume that \(\sum_{i=1}^{n}a_{ni}^{\alpha}\leq n\). We first prove

$$ L = : b_{n}^{-1}\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}EY_{ni} \Biggr|\rightarrow0 \quad \mbox{as } n\rightarrow\infty. $$
(3.2)

For \(0<\alpha\leq1\), by Lemma 2.3 and \(\sum_{i=1}^{n}a_{ni}^{\alpha}\leq n\), we have

$$\begin{aligned} L \leq&C b_{n}^{-1}\sum_{i=1}^{n} a_{ni}E|X|I\bigl(a_{ni}|X|\leq b_{n}\bigr) +C\sum _{i=1}^{n}P\bigl(a_{ni}|X|> b_{n}\bigr) \\ \leq&C b_{n}^{-\alpha}\sum_{i=1}^{n} a_{ni}^{\alpha}E|X|^{\alpha }I\bigl(a_{ni}|X|\leq b_{n}\bigr) +C b_{n}^{-\alpha}\sum _{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(a_{ni}|X|> b_{n}\bigr) \\ \leq&C (\log n)^{-\alpha/\gamma}E|X|^{\alpha} \rightarrow0 \quad\mbox{as } n\rightarrow\infty. \end{aligned}$$

For \(1<\alpha\leq2\), by \(EX_{i}=0\), \(|Z_{ni}|\leq a_{ni}|X_{i}|I(a_{ni}|X_{i}|>b_{n})\), and Lemma 2.3, we have

$$\begin{aligned} L =&b_{n}^{-1}\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}EZ_{ni} \Biggr| \leq b_{n}^{-1}\sum_{i=1}^{n} a_{ni}E|X_{i}|I\bigl(a_{ni}|X_{i}|>b_{n}\bigr) \\ \leq&C b_{n}^{-1}\sum_{i=1}^{n} a_{ni}E|X|I\bigl(a_{ni}|X|>b_{n}\bigr) \leq C b_{n}^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(a_{ni}|X|>b_{n}\bigr) \\ \leq&C (\log n)^{-\alpha/\gamma}E|X|^{\alpha} \rightarrow0 \quad\mbox{as } n\rightarrow\infty. \end{aligned}$$

Hence (3.2) holds for \(0<\alpha\leq2\). Therefore, for all sufficiently large n,

$$ \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}EY_{ni} \Biggr|\leq b_{n}\varepsilon /2. $$
(3.3)

Let \(q>\max\{2, 2\gamma/\alpha\}\). Then by (3.3), the Markov inequality, and Lemma 2.2, we have

$$\begin{aligned} H_{2} \leq&\sum_{n=2}^{\infty}n^{-1}P \Biggl(\max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}(Y_{ni}-EY_{ni}) \Biggr|>b_{n}\varepsilon/2 \Biggr) \\ \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q}E \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}(Y_{ni}-EY_{ni}) \Biggr|^{q} \\ \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \Biggl(\sum_{i=1}^{n}E|Y_{ni}|^{2} \Biggr)^{q/2}+C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \sum_{i=1}^{n}E|Y_{ni}|^{q} \\ = :&H_{3}+H_{4}. \end{aligned}$$

Firstly, we prove \(H_{3}<\infty\). By Lemma 2.3, \(\alpha\leq2\), \(\sum_{i=1}^{n}|a_{ni}|^{\alpha}\leq n\), and \(q>2\gamma/\alpha\), we have

$$\begin{aligned} H_{3} \leq&C\sum_{n=2}^{\infty}n^{-1} \Biggl(b_{n}^{-2}\sum_{i=1}^{n}a_{ni}^{2}E|X|^{2}I\bigl(a_{ni}|X| \leq b_{n}\bigr)+\sum_{i=1}^{n}P\bigl(a_{ni}|X|>b_{n}\bigr) \Biggr)^{q/2} \\ \leq&C\sum_{n=2}^{\infty}n^{-1} \Biggl(b_{n}^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(a_{ni}|X| \leq b_{n}\bigr) +b_{n}^{-\alpha}\sum _{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha }I\bigl(a_{ni}|X|>b_{n}\bigr) \Biggr)^{q/2} \\ \leq&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\frac{\alpha q}{2\gamma }} \bigl(E|X|^{\alpha} \bigr)^{q/2}< \infty. \end{aligned}$$

Next we consider \(H_{4}\). By Lemma 2.3, we have

$$\begin{aligned} H_{4} \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \sum_{i=1}^{n}a_{ni}^{q}E|X|^{q}I\bigl(a_{ni}|X| \leq b_{n}\bigr) +C\sum_{n=2}^{\infty}n^{-1} \sum_{i=1}^{n}P\bigl(a_{ni}|X|>b_{n}\bigr) \\ = :&H_{5}+H_{6}. \end{aligned}$$

Similarly to the proof of \(H_{1}<\infty\), we directly get \(H_{6}<\infty\). It remains to prove \(H_{5}<\infty\). For \(j\geq1\) and \(n\geq2\), let

$$I_{nj}= \bigl\{ 1\leq i\leq n: n^{1/\alpha}(j+1)^{-1/\alpha }< |a_{ni}| \leq n^{1/\alpha}j^{-1/\alpha} \bigr\} . $$

Then the sets \(\{I_{nj}, j\geq1\}\) are disjoint, and \(\bigcup_{j\geq1}I_{nj}=N\) for all \(n\geq1\) by \(\sum_{i=1}^{n}|a_{ni}|^{\alpha}\leq n\), where N is the set of positive integers. Note that, for all \(k\geq 1\), we have

$$\begin{aligned} n \geq&\sum_{i=1}^{n}|a_{ni}|^{\alpha} = \sum_{j=1}^{\infty}\sum _{i\in I_{nj}}|a_{ni}|^{\alpha} \geq \sum _{j=1}^{\infty}\sharp(I_{nj}) n (j+1)^{-1}\geq\sum_{j=k}^{\infty}\sharp(I_{nj}) n (j+1)^{-1} \\ =& \sum_{j=k}^{\infty} \sharp(I_{nj}) n (j+1)^{-q/\alpha}(j+1)^{q/\alpha-1} \geq\sum_{j=k}^{\infty}\sharp(I_{nj}) n (j+1)^{-q/\alpha }(k+1)^{q/\alpha-1}. \end{aligned}$$

Hence for all \(k\geq1\), we have

$$ \sum_{j=k}^{\infty}\sharp(I_{nj}) j^{-q/\alpha} \leq C (k+1)^{1-q/\alpha}. $$
(3.4)

Then

$$\begin{aligned} H_{5} =&\sum_{n=2}^{\infty}n^{-1-q/\alpha}( \log n)^{-q/\gamma} \sum_{i=1}^{n}|a_{ni}|^{q}E|X|^{q}I \bigl(|a_{ni}X|\leq n^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ =&\sum_{n=2}^{\infty}n^{-1-q/\alpha}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty}\sum _{i\in I_{nj}} |a_{ni}|^{q}E|X|^{q}I \bigl(|a_{ni}X|\leq n^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ \leq&\sum_{n=2}^{\infty}n^{-1-q/\alpha}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty} \sharp(I_{nj}) n^{q/\alpha}j^{-q/\alpha}E|X|^{q}I \bigl(|X|\leq(j+1)^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ \leq&\sum_{n=2}^{\infty}n^{-1}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty } \sharp(I_{nj}) j^{-q/\alpha}E|X|^{q}I\bigl(|X|\leq(\log n)^{1/\gamma}\bigr) \\ &{}+\sum_{n=2}^{\infty}n^{-1}(\log n)^{-q/\gamma}\sum_{j=1}^{\infty } \sharp(I_{nj}) j^{-q/\alpha}\\ &{}\times\sum_{k=1}^{j}E|X|^{q}I \bigl(k^{1/\alpha}(\log n)^{1/\gamma }< |X|\leq(k+1)^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ = :&H_{5}^{*}+H_{5}^{**}. \end{aligned}$$

By (3.4) and \(q>2\gamma/\alpha\geq\gamma\), we have

$$\begin{aligned} H_{5}^{*} \leq&C\sum_{n=2}^{\infty}n^{-1}( \log n)^{-q/\gamma }E|X|^{q}I\bigl(|X|^{\gamma}\leq\log n\bigr) \\ =&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-q/\gamma}\sum_{m=2}^{n}E|X|^{q}I \bigl(\log(m-1)< |X|^{\gamma}\leq\log m\bigr) \\ =&C\sum_{m=2}^{\infty}E|X|^{q}I \bigl(\log(m-1)< |X|^{\gamma}\leq\log m\bigr)\sum _{n=m}^{\infty}n^{-1}(\log n)^{-q/\gamma} \\ \leq&C\sum_{m=2}^{\infty}(\log m)^{1-q/\gamma}E|X|^{q}I\bigl(\log (m-1)< |X|^{\gamma}\leq\log m\bigr) \\ \leq&C E|X|^{\gamma}< \infty. \end{aligned}$$

By (3.4), we have

$$\begin{aligned} H_{5}^{**} =&\sum_{n=2}^{\infty}n^{-1}( \log n)^{-q/\gamma}\\ &{}\times\sum_{k=1}^{\infty}E|X|^{q}I \bigl(k^{1/\alpha}(\log n)^{1/\gamma}< |X| \leq(k+1)^{1/\alpha}(\log n)^{1/\gamma}\bigr)\sum_{j=k}^{\infty}\sharp (I_{nj}) j^{-q/\alpha} \\ \leq&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-q/\gamma}\\ &{}\times\sum_{k=1}^{\infty }(k+1)^{1-q/\alpha}E|X|^{q}I \bigl(k^{1/\alpha}(\log n)^{1/\gamma}< |X| \leq(k+1)^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ \leq&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha/\gamma}\sum_{k=1}^{\infty}E|X|^{\alpha}I \bigl(k^{1/\alpha}(\log n)^{1/\gamma}< |X| \leq(k+1)^{1/\alpha}(\log n)^{1/\gamma}\bigr) \\ =&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha }I\bigl(|X|>(\log n)^{1/\gamma}\bigr). \end{aligned}$$

As shown in the proof of Lemma 2.4,

$$\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha/\gamma}E|X|^{\alpha }I\bigl(|X|>(\log n)^{1/\gamma}\bigr) \leq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} {{CE|X|^{\alpha} }} & \mbox{for } \alpha>\gamma, \\ {{CE|X|^{\alpha}\log|X|}} & \mbox{for } \alpha=\gamma, \\ {{CE|X|^{\gamma}}} & \mbox{for } \alpha< \gamma. \end{array}\displaystyle \displaystyle \right . $$

Hence \(H_{5}^{**}<\infty\) under the assumptions of Theorem 3.1. The proof is completed. □
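The combinatorial bound (3.4) can be sanity-checked numerically for a concrete weight array; in this sketch the weights, \(n\), \(\alpha\), \(q\), and the constant \(C=2^{q/\alpha}\) (the constant the argument above actually yields) are my choices:

```python
import random
from collections import Counter

random.seed(1)
n, alpha, q = 2000, 1.5, 4.0
# Random weights rescaled so that sum_i |a_ni|^alpha = n, as assumed above.
a = [random.uniform(0.0, 2.0) for _ in range(n)]
scale = (n / sum(abs(x) ** alpha for x in a)) ** (1.0 / alpha)
a = [x * scale for x in a]

# Index i lands in I_nj exactly when j <= n / |a_ni|^alpha < j + 1,
# i.e. j = floor(n / |a_ni|^alpha); indices with a_ni = 0 are skipped.
buckets = Counter(int(n / x ** alpha) for x in a if x > 0)

# Check (3.4): sum_{j >= k} #(I_nj) j^{-q/alpha} <= C (k+1)^{1-q/alpha}.
C = 2.0 ** (q / alpha)
for k in [1, 2, 5, 10, 50]:
    lhs = sum(cnt * j ** (-q / alpha) for j, cnt in buckets.items() if j >= k)
    assert lhs <= C * (k + 1) ** (1.0 - q / alpha) + 1e-12
print("bound (3.4) holds for the sampled weights")
```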

Remark 3.3

It is easily seen that the proof of \(H_{5}<\infty\) complements Lemma 2.3 of Sung [9]. In fact, in that lemma Sung only proved \(H_{5}<\infty\) for the case \(\alpha=\gamma\). It is worth pointing out that the condition \(a_{ni}=0\) or \(|a_{ni}|>1\) is required in Sung [9]; here we do not need this extra condition.

Remark 3.4

Sung [9] proved Theorem 3.1 for the case \(\alpha=\gamma\) when \(\{X_{n}, n\geq1\}\) is a sequence of \(\rho^{*}\)-mixing random variables. However, he posed an open problem, that is, whether Theorem 3.1 (i.e. Theorem 1.1 in Sung [9]) remains true for \(\rho^{*}\)-mixing random variables.

The crucial tool of the proof of Theorem 3.1 is the Rosenthal-type inequality for maximum partial sums of m-NA random variables. For \(\rho ^{*}\)-mixing random variables, the Rosenthal-type inequality for maximum partial sums also holds (see Utev and Peligrad [29]). Therefore, it is easy to solve the above open problem by following the method used in the proof of Theorem 3.1.

Proof of Theorem 3.2

For any given \(\varepsilon>0\), we have

$$\begin{aligned} &\sum_{n=2}^{\infty}n^{-1}E \Biggl\{ b_{n}^{-1}\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}a_{ni}X_{i} \Biggr|- \varepsilon \Biggr\} _{+}^{\alpha} \\ &\quad=\sum_{n=2}^{\infty}n^{-1}\int _{0}^{\infty}P \Biggl(b_{n}^{-1}\max _{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>\varepsilon+t^{1/\alpha } \Biggr)\,{\mathrm{d}}t \\ &\quad\leq\sum_{n=2}^{\infty}n^{-1}P \Biggl(\max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>b_{n}\varepsilon \Biggr) +\sum_{n=2}^{\infty}n^{-1} \int_{1}^{\infty}P \Biggl(\max_{1\leq m\leq n} \Biggl| \sum_{i=1}^{m}a_{ni}X_{i} \Biggr|>b_{n}t^{1/\alpha} \Biggr)\,{\mathrm {d}}t \\ &\quad=:I_{1}+I_{2}. \end{aligned}$$

Therefore, to prove (3.1), it suffices to show that \(I_{1}<\infty\) and \(I_{2}<\infty\). By Theorem 3.1, we directly get \(I_{1}<\infty\). For all \(t\geq1\), denote

$$\begin{aligned}& Y_{ni}=-b_{n} t^{1/\alpha}I\bigl(a_{ni}X_{i}< -b_{n} t^{1/\alpha}\bigr)+a_{ni}X_{i} I\bigl(a_{ni}|X_{i}| \leq b_{n} t^{1/\alpha}\bigr)+b_{n} t^{1/\alpha}I \bigl(a_{ni}X_{i}>b_{n} t^{1/\alpha}\bigr), \\& Z_{ni}=a_{ni}X_{i}-Y_{ni}. \end{aligned}$$

Then

$$\begin{aligned} I_{2} \leq&\sum_{n=2}^{\infty}n^{-1} \int_{1}^{\infty}P \Bigl(\max_{1\leq i\leq n}a_{ni}|X_{i}|>b_{n}t^{1/\alpha} \Bigr)\,{\mathrm{d}}t \\ &{}+\sum_{n=2}^{\infty}n^{-1}\int _{1}^{\infty}P \Biggl(\max_{1\leq m\leq n} \Biggl|\sum _{i=1}^{m}Y_{ni} \Biggr|>b_{n}t^{1/\alpha} \Biggr)\,{\mathrm{d}}t \\ = :&I_{3}+I_{4}. \end{aligned}$$

Noting that

$$\int_{1}^{\infty}P \bigl(a_{ni}|X|>b_{n}t^{1/\alpha} \bigr)\,{\mathrm{d}}t \leq b_{n}^{-\alpha}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(|a_{ni}X|>b_{n}\bigr), $$

by \(\{X_{i}\} \prec X\), Lemma 2.4, and the assumptions of Theorem 3.2, we have

$$\begin{aligned} I_{3} \leq&\sum_{n=2}^{\infty}n^{-1} \sum_{i=1}^{n}\int_{1}^{\infty}P \bigl(a_{ni}|X_{i}|>b_{n}t^{1/\alpha} \bigr) \,{\mathrm{d}}t \leq\sum_{n=2}^{\infty}n^{-1}\sum _{i=1}^{n}\int_{1}^{\infty}P \bigl(a_{ni}|X|>b_{n}t^{1/\alpha} \bigr)\,{\mathrm{d}}t \\ \leq&\sum_{n=2}^{\infty}n^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}a_{ni}^{\alpha }E|X|^{\alpha}I\bigl(a_{ni}|X|>b_{n}\bigr)< \infty. \end{aligned}$$

Next we prove that \(I_{4}<\infty\). We first show

$$ J = \sup_{t\geq1}t^{-1/\alpha}b_{n}^{-1} \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}EY_{ni} \Biggr|\rightarrow0\quad\mbox{as } n\rightarrow\infty. $$
(3.5)

For \(0<\alpha\leq1\), by Lemma 2.3 and \(\sum_{i=1}^{n}a_{ni}^{\alpha}\leq n\), we have

$$\begin{aligned} J \leq&C\sup_{t\geq1}t^{-1/\alpha}b_{n}^{-1} \sum_{i=1}^{n} a_{ni}E|X|I \bigl(a_{ni}|X|\leq b_{n} t^{1/\alpha}\bigr) +C\sup _{t\geq1}\sum_{i=1}^{n}P \bigl(a_{ni}|X|> b_{n} t^{1/\alpha}\bigr) \\ \leq&C\sup_{t\geq1}t^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n} a_{ni}^{\alpha }E|X|^{\alpha}I \bigl(a_{ni}|X|\leq b_{n} t^{1/\alpha}\bigr) +C b_{n}^{-\alpha}\sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha} \\ \leq&C (\log n)^{-\alpha/\gamma}E|X|^{\alpha} \rightarrow0 \quad\mbox{as } n\rightarrow\infty. \end{aligned}$$

For \(1<\alpha\leq2\), by \(EX_{i}=0\), \(|Z_{ni}|\leq a_{ni}|X_{i}|I(a_{ni}|X_{i}|>b_{n} t^{1/\alpha})\), and Lemma 2.3, we have

$$\begin{aligned} J =&\sup_{t\geq1}t^{-1/\alpha}b_{n}^{-1} \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}EZ_{ni} \Biggr| \\ \leq&C\sup_{t\geq1}t^{-1/\alpha}b_{n}^{-1} \sum_{i=1}^{n} a_{ni}E|X|I \bigl(a_{ni}|X|>b_{n} t^{1/\alpha}\bigr) \\ \leq&C\sup_{t\geq1}t^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}a_{ni}^{\alpha }E|X|^{\alpha}I \bigl(a_{ni}|X|>b_{n} t^{1/\alpha}\bigr) \\ \leq&C (\log n)^{-\alpha/\gamma}E|X|^{\alpha} \rightarrow0 \quad\mbox{as } n\rightarrow\infty. \end{aligned}$$

From (3.5), we know that, for all sufficiently large n,

$$ \max_{1\leq m\leq n} \Biggl|\sum_{i=1}^{m}EY_{ni} \Biggr|\leq b_{n} t^{1/\alpha}/2 $$
(3.6)

holds uniformly for \(t\geq1\).

Let \(q>\max\{2, 2\gamma/\alpha\}\). Then by (3.6) and Lemma 2.2, we have

$$\begin{aligned} I_{4} \leq&\sum_{n=2}^{\infty}n^{-1} \int_{1}^{\infty}P \Biggl(\max_{1\leq m\leq n} \Biggl| \sum_{i=1}^{m} (Y_{ni}-EY_{ni} ) \Biggr|>b_{n}t^{1/\alpha}/2 \Biggr)\,{\mathrm{d}}t \\ \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \int_{1}^{\infty}t^{-q/\alpha} E\max _{1\leq m\leq n} \Biggl|\sum_{i=1}^{m} (Y_{ni}-EY_{ni} ) \Biggr|^{q}\,{\mathrm{d}}t \\ \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \int_{1}^{\infty}t^{-q/\alpha} \sum _{i=1}^{n}E|Y_{ni}|^{q}\,{ \mathrm{d}}t+C\sum_{n=2}^{\infty }n^{-1}b_{n}^{-q} \int_{1}^{\infty}t^{-q/\alpha} \Biggl(\sum _{i=1}^{n}E|Y_{ni}|^{2} \Biggr)^{q/2}\,{\mathrm{d}}t \\ = :&I_{5}+I_{6}. \end{aligned}$$

By Lemma 2.3, \(\alpha\leq2\), and \(q>2\gamma/\alpha\), we have

$$\begin{aligned} I_{6} \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \int_{1}^{\infty}t^{-q/\alpha} \Biggl(\sum _{i=1}^{n}a_{ni}^{2}E|X|^{2} I\bigl(a_{ni}|X|\leq b_{n} t^{1/\alpha}\bigr) \\ &{}+b_{n}^{2} t^{2/\alpha}\sum _{i=1}^{n}P\bigl(a_{ni}|X|> b_{n} t^{1/\alpha}\bigr) \Biggr)^{q/2}\,{\mathrm{d}}t \\ \leq&C\sum_{n=2}^{\infty}n^{-1}\int _{1}^{\infty} \Biggl(b_{n}^{-\alpha}t^{-1} \sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha} I\bigl(a_{ni}|X|\leq b_{n} t^{1/\alpha}\bigr) \\ &{}+b_{n}^{-\alpha}t^{-1}\sum _{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha }I \bigl(a_{ni}|X|> b_{n} t^{1/\alpha}\bigr) \Biggr)^{q/2}\,{\mathrm{d}}t \\ \leq&C\sum_{n=2}^{\infty}n^{-1}\int _{1}^{\infty}t^{-q/2} \Biggl(b_{n}^{-\alpha} \sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha} \Biggr)^{q/2}\,{\mathrm{d}}t \\ \leq&C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-\alpha q/(2\gamma)} \bigl(E|X|^{\alpha} \bigr)^{q/2} < \infty. \end{aligned}$$

For \(I_{5}\), we have

$$\begin{aligned} I_{5} \leq&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \sum_{i=1}^{n}\int_{1}^{\infty }t^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(a_{ni}|X|\leq b_{n} t^{1/\alpha}\bigr)\,{ \mathrm{d}}t \\ &{}+C\sum_{n=2}^{\infty}n^{-1}\sum _{i=1}^{n}\int_{1}^{\infty}P \bigl(a_{ni}|X|> b_{n} t^{1/\alpha}\bigr)\,{\mathrm{d}}t \\ =&C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \sum_{i=1}^{n}\int_{1}^{\infty }t^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(a_{ni}|X|\leq b_{n}\bigr)\,{\mathrm{d}}t \\ &{}+C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-q} \sum_{i=1}^{n}\int_{1}^{\infty }t^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}< a_{ni}|X|\leq b_{n} t^{1/\alpha} \bigr)\,{\mathrm{d}}t \\ &{}+C\sum_{n=2}^{\infty}n^{-1}\sum _{i=1}^{n}\int_{1}^{\infty}P \bigl(a_{ni}|X|> b_{n} t^{1/\alpha}\bigr)\,{\mathrm{d}}t \\ = :&I_{7}+I_{8}+I_{9}. \end{aligned}$$

Similarly to the proof of \(I_{3}<\infty\), we get \(I_{9}<\infty\); similarly to the proof of \(H_{5}<\infty\), we get \(I_{7}<\infty\). By \(q>2\geq\alpha\) and the following standard argument, we get

$$\begin{aligned} &b_{n}^{-q}\int_{1}^{\infty}t^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}< a_{ni}|X|\leq b_{n} t^{1/\alpha} \bigr)\,{\mathrm{d}}t \\ &\quad\leq b_{n}^{-q}\sum_{m=1}^{\infty} \int_{m}^{m+1}t^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}< a_{ni}|X|\leq b_{n} t^{1/\alpha} \bigr)\,{\mathrm{d}}t \\ &\quad\leq b_{n}^{-q}\sum_{m=1}^{\infty}m^{-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}< a_{ni}|X|\leq b_{n} (m+1)^{1/\alpha}\bigr) \\ &\quad\leq b_{n}^{-q}\sum_{m=1}^{\infty}m^{-q/\alpha} \sum_{s=1}^{m}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}s^{1/\alpha}< a_{ni}|X|\leq b_{n} (s+1)^{1/\alpha}\bigr) \\ &\quad\leq b_{n}^{-q}\sum_{s=1}^{\infty}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}s^{1/\alpha}< a_{ni}|X|\leq b_{n} (s+1)^{1/\alpha}\bigr)\sum_{m=s}^{\infty }m^{-q/\alpha} \\ &\quad\leq C b_{n}^{-q}\sum_{s=1}^{\infty}s^{1-q/\alpha}a_{ni}^{q}E|X|^{q} I\bigl(b_{n}s^{1/\alpha}< a_{ni}|X|\leq b_{n} (s+1)^{1/\alpha}\bigr) \\ &\quad\leq C b_{n}^{-\alpha}\sum_{s=1}^{\infty}a_{ni}^{\alpha}E|X|^{\alpha} I\bigl(b_{n}s^{1/\alpha}< a_{ni}|X|\leq b_{n} (s+1)^{1/\alpha}\bigr) \\ &\quad\leq C b_{n}^{-\alpha}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(a_{ni}|X|>b_{n}\bigr). \end{aligned}$$

Hence by Lemma 2.4, we have

$$I_{8} \leq C\sum_{n=2}^{\infty}n^{-1}b_{n}^{-\alpha} \sum_{i=1}^{n}a_{ni}^{\alpha}E|X|^{\alpha}I\bigl(a_{ni}|X|>b_{n}\bigr)< \infty. $$

The proof is completed. □

References

  1. Cuzick, J: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 8, 625-641 (1995)

  2. Wu, WB: On the strong convergence of a weighted sum. Stat. Probab. Lett. 44, 19-22 (1999)

  3. Bai, ZD, Cheng, PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 46, 105-112 (2000)

  4. Sung, SH: Strong laws for weighted sums of i.i.d. random variables. Stat. Probab. Lett. 52, 413-419 (2001)

  5. Chen, PY, Gan, SX: Limiting behavior of weighted sums of i.i.d. random variables. Stat. Probab. Lett. 77, 1589-1599 (2007)

  6. Cai, GH: Strong laws for weighted sums of NA random variables. Metrika 68, 323-331 (2008)

  7. Wu, QY: Complete convergence for weighted sums of sequences of negatively dependent random variables. J. Probab. Stat. 2011, Article ID 202015 (2011)

  8. Zarei, H, Jabbari, H: Complete convergence of weighted sums under negative dependence. Stat. Pap. 52, 413-418 (2011)

  9. Sung, SH: On the strong convergence for weighted sums of \(\rho^{*}\)-mixing random variables. Stat. Pap. 54, 773-781 (2013)

  10. Sung, SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 52, 447-454 (2011)

  11. Shen, AT: On the strong convergence rate for weighted sums of arrays of rowwise negatively orthant dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. (2012). doi:10.1007/s13398-012-0067-5

  12. Chen, PY, Sung, SH: On the strong convergence for weighted sums of negatively associated random variables. Stat. Probab. Lett. 92, 45-52 (2014)

  13. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. 33, 25-31 (1947)

  14. Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)

  15. Joag-Dev, K, Proschan, F: Negative association of random variables with applications. Ann. Stat. 11, 286-295 (1983)

  16. Hu, YJ, Ming, RX, Yang, WQ: Large deviations and moderate deviations for m-negatively associated random variables. Acta Math. Sci. 27, 886-896 (2007)

  17. Matula, P: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 15, 209-213 (1992)

  18. Su, C, Zhao, LC, Wang, YB: Moment inequalities and weak convergence for negatively associated sequences. Sci. China Ser. A 40, 172-182 (1997)

  19. Shao, QM: A comparison theorem on maximum inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343-356 (2000)

  20. Gan, SX, Chen, PY: On the limiting behavior of the maximum partial sums for arrays of rowwise NA random variables. Acta Math. Sci. 27, 283-290 (2007)

  21. Fu, KA, Zhang, LX: Precise rates in the law of the logarithm for negatively associated random variables. Comput. Math. Appl. 54, 687-698 (2007)

  22. Baek, JI, Choi, IB, Niu, SL: On the complete convergence of weighted sums for arrays of negatively associated variables. J. Korean Stat. Soc. 37, 73-80 (2008)

  23. Chen, PY, Hu, TC, Liu, X, Volodin, A: On complete convergence for arrays of rowwise negatively associated random variables. Theory Probab. Appl. 52, 323-328 (2008)

  24. Xing, GD: On the exponential inequalities for strictly stationary and negatively associated random variables. J. Stat. Plan. Inference 139, 3453-3460 (2009)

  25. Qin, YS, Li, YH: Empirical likelihood for linear models under negatively associated errors. J. Multivar. Anal. 102, 153-163 (2011)

  26. Wu, YF: Convergence properties of the maximal partial sums for arrays of rowwise NA random variables. Theory Probab. Appl. 56, 527-535 (2012)

  27. Hu, TC, Chiang, CY, Taylor, RL: On complete convergence for arrays of rowwise m-negatively associated random variables. Nonlinear Anal., Theory Methods Appl. 71, e1075-e1081 (2009)

  28. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25, 1916-1920 (2012)

  29. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101-115 (2003)

Acknowledgements

The authors are grateful to the referee for carefully reading the manuscript and for offering comments which enabled them to improve the paper. The research of Y Wu was supported by the Natural Science Foundation of Anhui Province (1308085MA03, 1408085MA03), the Key Grant Project for Backup Academic Leaders of Tongling University (2014tlxyxs21) and the Key NSF of Anhui Educational Committee (KJ2014A255).

Author information

Corresponding author

Correspondence to Yongfeng Wu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wu, Y., Hu, TC. & Volodin, A. Complete convergence and complete moment convergence for weighted sums of m-NA random variables. J Inequal Appl 2015, 200 (2015). https://doi.org/10.1186/s13660-015-0717-1
