Open Access

Complete convergence and complete moment convergence for arrays of rowwise ANA random variables

Journal of Inequalities and Applications 2016, 2016:72

https://doi.org/10.1186/s13660-016-1016-1

Received: 30 August 2015

Accepted: 8 February 2016

Published: 20 February 2016

Abstract

In this article, we investigate complete convergence and complete moment convergence for weighted sums of arrays of rowwise asymptotically negatively associated (ANA) random variables. The results obtained not only generalize the corresponding ones of Sung (Stat. Pap. 52:447-454, 2011), Zhou et al. (J. Inequal. Appl. 2011:157816, 2011), and Sung (Stat. Pap. 54:773-781, 2013) to the case of ANA random variables, but also improve them.

Keywords

arrays of rowwise ANA random variables; complete convergence; complete moment convergence; weighted sums

MSC

60F15

1 Introduction

Recently, Sung [1] proved the following strong laws of large numbers for weighted sums of identically distributed negatively associated (NA) random variables.

Theorem A

Let \(\{ {{X}_{n}},n\ge1 \}\) be a sequence of identically distributed NA random variables, and let \(\{ {{a}_{ni}},1\le i\le n,n\ge1 \}\) be an array of real constants satisfying
$$ \sum_{i=1}^{n}{{{\vert {{a}_{ni}} \vert }^{\alpha}}}=O ( n ) $$
(1.1)
for some \(0<\alpha\le2\). Let \({{b}_{n}}={{n}^{1/\alpha}}{{ ( \log n )}^{1/\gamma}}\) for some \(\gamma>0\). Furthermore, suppose that \(E{{X}_{1}}=0\) for \(1<\alpha\le2\). If
$$ \begin{aligned} &E|X_{1}|^{\alpha}< \infty \quad \textit{for } \alpha>\gamma, \\ &E|X_{1}|^{\alpha}\log\bigl(1+\vert X_{1}\vert \bigr)< \infty\quad \textit{for } \alpha=\gamma, \\ &E|X_{1}|^{\gamma}< \infty \quad \textit{for } \alpha< \gamma, \end{aligned} $$
(1.2)
then
$$ \sum_{n=1}^{\infty}{ \frac{1}{n}}P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{i}}} \Biggr\vert >\varepsilon{{b}_{n}} \Biggr)< \infty \quad \textit{for all } \varepsilon>0. $$
(1.3)
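As a quick numerical sanity check of the normalization in Theorem A (a hypothetical Python sketch, not part of the original paper; the specific weights \(a_{ni}=(i/n)^{1/\alpha}\) are an illustrative choice), condition (1.1) and the growth of \(b_n\) can be verified directly:

```python
import math

def b(n, alpha, gamma):
    """Normalizing constants b_n = n^(1/alpha) * (log n)^(1/gamma) from Theorem A."""
    return n ** (1 / alpha) * math.log(n) ** (1 / gamma)

def weight_sum(n, alpha):
    """sum_{i=1}^n |a_ni|^alpha for the illustrative weights a_ni = (i/n)^(1/alpha)."""
    return sum(abs((i / n) ** (1 / alpha)) ** alpha for i in range(1, n + 1))

# Condition (1.1): the alpha-th power sums grow at most linearly in n.
for n in (10, 100, 1000):
    assert weight_sum(n, alpha=2) <= n   # equals (n + 1) / 2 exactly, so O(n)
assert b(100, 2, 1) > b(10, 2, 1)        # b_n is eventually increasing
```

For these weights \(\sum_{i=1}^{n}|a_{ni}|^{\alpha}=(n+1)/2\), so (1.1) holds with the implied constant 1.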

Zhou et al. [2] partially extended Theorem A for NA random variables to the case of ρ̃-mixing (or \({{\rho }^{*}}\)-mixing) random variables by using a different method.

Theorem B

Let \(\{ {{X}_{n}},n\ge1 \}\) be a sequence of identically distributed ρ̃-mixing random variables, and let \(\{ {{a}_{ni}},1\le i\le n,n\ge1 \}\) be an array of real constants satisfying
$$ \sum_{i=1}^{n}{{{\vert {{a}_{ni}} \vert }^{\max \{ \alpha,\gamma \}}}}=O ( n ) $$
(1.4)
for some \(0<\alpha\le2\) and \(\gamma>0\) with \(\alpha\ne\gamma\). Let \({{b}_{n}}={{n}^{1/\alpha}}{{ ( \log n )}^{1/\gamma}}\). Assume that \(E{{X}_{1}}=0\) for \(1<\alpha\le2\). If (1.2) is satisfied for \(\alpha\ne\gamma\), then (1.3) holds.

Zhou et al. [2] left open the question of whether the case \(\alpha=\gamma\) of Theorem A holds for ρ̃-mixing random variables. Sung [3] solved this problem and obtained the following result.

Theorem C

Let \(\{ {{X}_{n}},n\ge1 \}\) be a sequence of identically distributed ρ̃-mixing random variables, and let \(\{ {{a}_{ni}},1\le i\le n,n\ge1 \}\) be an array of real constants satisfying (1.4) for some \(0<\alpha\le2\) and \(\gamma>0\) with \(\alpha=\gamma\). Let \({{b}_{n}}={{n}^{1/\alpha }}{{ ( \log n )}^{1/\alpha}}\). Assume that \(E{{X}_{1}}=0\) for \(1<\alpha\le2\). If (1.2) is satisfied for \(\alpha=\gamma\), then (1.3) holds.

Inspired by these theorems, we further investigate the limiting behavior of weighted sums and obtain considerably stronger conclusions, which extend and improve Theorems A, B, and C to a wider class of dependent random variables under the same conditions.

Now we introduce some definitions of dependent structures.

Definition 1.1

A finite family of random variables \(\{ {{X}_{i}},1\le i\le n \}\) is said to be NA if for any disjoint subsets A and B of \(\{ 1,2,\ldots,n \}\),
$$ \operatorname{Cov} \bigl( {{f}_{1}} ( {{X}_{i}},i\in A ),{{f}_{2}} ( {{X}_{j}},j\in B ) \bigr)\le0 $$
(1.5)
whenever \({{f}_{1}}\) and \({{f}_{2}}\) are real coordinatewise nondecreasing functions such that this covariance exists. An infinite family of random variables \(\{ {{X}_{n}},n\ge1 \}\) is NA if every finite subfamily is NA.

Definition 1.2

A sequence of random variables \(\{ {{X}_{n}},n\ge1 \}\) is called ρ̃-mixing if the mixing coefficient
$$ \tilde{\rho} ( s )=\sup \bigl\{ \rho ( S,T ):S,T\subset\mathrm{N}, \operatorname{dist} (S,T )\ge s \bigr\} \to 0 \quad \text{as } s\to\infty, $$
(1.6)
where
$$ \rho ( S,T )=\sup \biggl\{ \frac{\vert EXY-EXEY \vert }{\sqrt{\operatorname{Var}X}\cdot\sqrt{\operatorname{Var}Y}};X\in {{L}_{2}} \bigl( \sigma(S) \bigr),Y\in{{L}_{2}} \bigl( \sigma(T) \bigr) \biggr\} , $$
(1.7)
and \(\sigma(S)\) and \(\sigma(T)\) are the σ-fields generated by \(\{ {{X}_{i}},i\in S \}\) and \(\{ {{X}_{i}},i\in T \}\), respectively.

Definition 1.3

A sequence of random variables \(\{ {{X}_{n}},n\ge1 \}\) is said to be ANA if
$$ {{\rho}^{-}} (s )=\sup \bigl\{ {{\rho}^{-}} (S,T ):S,T\subset\mathrm{N}, \operatorname{dist} (S,T )\ge s \bigr\} \to 0\quad \text{as }s\to\infty, $$
(1.8)
where
$$ {{\rho}^{-}} (S,T )=0\vee\sup \biggl\{ \frac{{\operatorname{Cov}} ( f ( {{X}_{i}},i\in S ),g ( {{X}_{j}},j\in T ) )}{{{ ( \operatorname{Var}f ( {{X}_{i}},i\in S ) )}^{1/2}}{{ ( \operatorname{Var}g ( {{X}_{j}},j\in T ) )}^{1/2}}}:f,g \in C \biggr\} , $$
(1.9)
and C is the set of real coordinatewise nondecreasing functions.

An array of random variables \(\{ {{X}_{ni}},i\ge1,n\ge1 \} \) is called an array of rowwise ANA random variables if, for every \(n\ge1\), \(\{ {{X}_{ni}},i\ge1 \}\) is a sequence of ANA random variables.

It is easy to see that \({{\rho}^{-}} ( s )\le\tilde{\rho } ( s )\) and that a sequence of ANA random variables is NA if and only if \({{\rho}^{-}} ( 1 )=0\). Hence ANA random variables include ρ̃-mixing and NA random variables as special cases, and the study of the limit convergence properties for ANA random variables is of much interest. Since the concept of ANA random variables was introduced by Zhang and Wang [4], many applications have been found. For example, Zhang and Wang [4] and Zhang [5, 6] obtained moment inequalities for partial sums, central limit theorems, complete convergence, and the strong law of large numbers; Wang and Lu [7] established inequalities for the maximum of partial sums and weak convergence; Wang and Zhang [8] obtained the law of the iterated logarithm; Liu and Liu [9] studied moments of the maximum of normed partial sums; Yuan and Wu [10] obtained the limiting behavior of the maximum of partial sums; Budsaba et al. [11] investigated complete convergence for moving-average processes based on sequences of ANA and NA random variables; Tan et al. [12] obtained the almost sure central limit theorem; Ko [13] obtained the Hájek-Rényi inequality and the strong law of large numbers; and Zhang [14] established complete moment convergence for moving-average processes generated by ANA random variables.

In this work, we further study the strong convergence for weighted sums of arrays of ANA random variables without the assumption of identical distribution and obtain some improved results (namely, complete moment convergence, introduced in Definition 1.5). As applications, complete convergence theorems for weighted sums of arrays of identically distributed NA and ρ̃-mixing random variables can be obtained. The results obtained not only generalize the corresponding ones of Sung [1, 3] and Zhou et al. [2], but also improve them under the same conditions.

For the proofs of the main results, we need to restate some definitions used in this paper.

Definition 1.4

A sequence of random variables \(\{ {{X}_{n}},n\ge1 \}\) converges completely to a constant λ if for all \(\varepsilon>0\),
$$ \sum_{n=1}^{\infty}{P \bigl( \vert {{X}_{n}}-\lambda \vert >\varepsilon \bigr)}< \infty. $$
This notion was first introduced by Hsu and Robbins [15].
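Complete convergence requires summability of the tail probabilities themselves, which is strictly stronger than almost sure convergence. The following hypothetical Python sketch (the symmetric ±1 summands are our illustrative choice, not taken from the paper) computes the exact tails \(P ( \vert {{S}_{n}}/n \vert >\varepsilon )\) for a simple random walk and shows they decay fast enough for the series in Definition 1.4 to converge:

```python
from math import comb

def tail_prob(n, eps):
    """Exact P(|S_n / n| > eps) for S_n a sum of n i.i.d. symmetric +-1 variables.

    S_n = 2B - n with B ~ Binomial(n, 1/2), so |S_n / n| > eps holds
    iff 2k - n > eps * n or 2k - n < -eps * n for the binomial count k.
    """
    hi = sum(comb(n, k) for k in range(n + 1) if 2 * k - n > eps * n)
    lo = sum(comb(n, k) for k in range(n + 1) if 2 * k - n < -eps * n)
    return (hi + lo) / 2 ** n

# Geometric decay of the tails (checked here only for a few n) makes
# sum_n P(|S_n / n| > eps) finite, i.e. S_n / n -> 0 completely.
p = [tail_prob(n, 0.5) for n in (4, 8, 16)]
assert p[0] > p[1] > p[2]
```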

Definition 1.5

(Chow [16])

Let \(\{ {{X}_{n}},n\ge1 \}\) be a sequence of random variables, and \({{a}_{n}}>0\), \({{b}_{n}}>0\), \(q>0\). If for all \(\varepsilon\ge0\),
$$ \sum_{n=1}^{\infty}{{{a}_{n}}E \bigl( b_{n}^{-1}\vert {{X}_{n}} \vert -\varepsilon \bigr)_{+}^{q}}< \infty, $$
then the sequence \(\{ {{X}_{n}},n\ge1 \}\) is said to satisfy complete moment convergence.

Definition 1.6

A sequence of random variables \(\{ {{X}_{n}},n\ge1 \}\) is said to be stochastically dominated by a random variable X if there exists a positive constant C such that
$$ P \bigl( \vert {{X}_{n}} \vert \ge x \bigr)\le CP \bigl( \vert X \vert \ge x \bigr) $$
for all \(x\ge0\) and \(n\ge1\).
An array of rowwise random variables \(\{ {{X}_{ni}},i\ge1,n\ge1 \}\) is said to be stochastically dominated by a random variable X if there exists a positive constant C such that
$$ P \bigl( \vert {{X}_{ni}} \vert \ge x \bigr)\le CP \bigl( \vert X \vert \ge x \bigr) $$
for all \(x\ge0\), \(i\ge1\) and \(n\ge1\).
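Definition 1.6 can be illustrated by one concrete family (a hypothetical Python sketch; the uniform distributions below are our illustrative choice, not from the paper): rows \({{X}_{ni}}\sim\mathrm{Uniform} [ -{{c}_{i}},{{c}_{i}} ]\) with \({{c}_{i}}=i/ ( i+1 )\le1\) are stochastically dominated by \(X\sim\mathrm{Uniform} [ -1,1 ]\) with \(C=1\), since the tails compare pointwise:

```python
def tail_ni(i, x):
    """P(|X_ni| >= x) for the illustrative X_ni ~ Uniform[-c_i, c_i], c_i = i/(i+1)."""
    c = i / (i + 1)
    return max(0.0, 1.0 - x / c)

def tail_X(x):
    """P(|X| >= x) for the dominating variable X ~ Uniform[-1, 1]."""
    return max(0.0, 1.0 - x)

# Definition 1.6 with C = 1: every row tail is bounded by the tail of X,
# because x / c_i >= x whenever c_i <= 1.
C = 1.0
for i in range(1, 50):
    for k in range(101):
        x = k / 100
        assert tail_ni(i, x) <= C * tail_X(x) + 1e-12
```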

Throughout this paper, \(I ( A )\) is the indicator function of a set A. The symbol C denotes a positive constant, which may be different in various places, and \({{a}_{n}}=O ( {{b}_{n}} )\) means that \({{a}_{n}}\le C{{b}_{n}}\).

2 Main results

Now, we state our main results. The proofs will be given in Section 3.

Theorem 2.1

Let \(\{ {{X}_{ni}},i\ge1,n\ge1 \}\) be an array of rowwise ANA random variables stochastically dominated by a random variable X, let \(\{ {{a}_{ni}},1\le i\le n,n\ge1 \}\) be an array of constants satisfying (1.1) for some \(0<\alpha\le2\), and let \({{b}_{n}}={{n}^{1/\alpha}}{{ ( \log n )}^{1/\gamma}}\) for some \(\gamma>0\). Furthermore, assume that \(E{{X}_{ni}}=0\) for \(1<\alpha\le2\). If
$$ \begin{aligned} &E|X|^{\alpha}< \infty \quad \textit{for } \alpha>\gamma, \\ &E|X|^{\alpha}\log\bigl(1+|X|\bigr)< \infty \quad \textit{for } \alpha=\gamma, \\ &E|X|^{\gamma}< \infty \quad \textit{for } \alpha< \gamma, \end{aligned} $$
(2.1)
then
$$ \sum_{n=1}^{\infty}{ \frac{1}{n}}P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >\varepsilon{{b}_{n}} \Biggr)< \infty \quad \textit{for all } \varepsilon>0. $$
(2.2)

Theorem 2.2

Let \(\{ {{X}_{ni}},i\ge1,n\ge1 \}\) be an array of rowwise ANA random variables which is stochastically dominated by a random variable X, let \(\{ {{a}_{ni}},1\le i\le n,n\ge1 \}\) be an array of real constants satisfying (1.1) for some \(0<\alpha\le2\), and let \({{b}_{n}}={{n}^{1/\alpha}}{{ ( \log n )}^{1/\gamma}}\) for some \(\gamma>0\). Furthermore, assume that \(E{{X}_{ni}}=0\) for \(1<\alpha\le2\). If (2.1) holds, then, for \(0< q<\alpha\),
$$ \sum_{n=1}^{\infty}{ \frac{1}{n}}E \Biggl( \frac {1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert -\varepsilon \Biggr)_{+}^{q}< \infty \quad \textit{for all } \varepsilon>0. $$
(2.3)

Remark 2.1

Since ANA includes NA and ρ̃-mixing, Theorem 2.1 extends Theorem A for identically distributed NA random variables and Theorems B and C for identically distributed ρ̃-mixing random variables (by taking \({{X}_{ni}}={{X}_{i}}\) in Theorem 2.1).

Remark 2.2

Under the conditions of Theorem 2.2, we can obtain that
$$\begin{aligned} \infty >& \sum_{n=1}^{\infty}{ \frac{1}{n}}E \Biggl( \frac {1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert -\frac{\varepsilon}{2} \Biggr)_{+}^{q} \\ =& \sum_{n=1}^{\infty}{\frac{1}{n}} \int_{0}^{\infty}{P \Biggl( \frac{1}{{{b}_{n}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert -\frac{\varepsilon}{2} >{{t}^{1/q}} \Biggr)\,d}t \\ \ge& \int_{0}^{{{ ( \varepsilon/2 )}^{q}}}{\sum_{n=1}^{\infty}{ \frac{1}{n}}P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >{{b}_{n}}\varepsilon \Biggr)}\,dt \\ =& {{ \biggl( \frac{\varepsilon}{2} \biggr)}^{q}}\sum_{n=1}^{\infty}{\frac{1}{n}}P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >{{b}_{n}}\varepsilon \Biggr) \quad \text{for all } \varepsilon>0, \end{aligned}$$
(2.4)

Hence, (2.4) shows that complete moment convergence implies complete convergence. It is worth pointing out that, compared with the results of Sung [1, 3] and Zhou et al. [2], our main result is stronger under the same conditions. Thus, Theorem 2.2 extends and improves the corresponding results of Sung [1, 3] and Zhou et al. [2].
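The passage from moment convergence to probability convergence in (2.4), and likewise the decomposition (3.14), rests on the identity \(E ( Y-\varepsilon )_{+}^{q}=\int_{0}^{\infty}{P ( Y-\varepsilon>{{t}^{1/q}} )\,dt}\). A hypothetical numerical sketch (with \(q=1\), \(\varepsilon=0.5\), and \(Y\sim\mathrm{Uniform} [ 0,2 ]\) as an illustrative stand-in for the normalized maximum) confirms that both sides agree:

```python
def tail(t, eps=0.5):
    """P(Y - eps > t) for Y ~ Uniform[0, 2]: piecewise linear, zero past t = 1.5."""
    return max(0.0, (2.0 - eps - t) / 2.0)

def exact_plus_moment():
    """E(Y - 0.5)_+ in closed form: integral of (y - 0.5)/2 over [0.5, 2] = 0.5625."""
    return 0.5625

def plus_moment_via_tail(n=200000):
    """Midpoint-rule approximation of int_0^inf P(Y - eps > t) dt (the q = 1 case)."""
    dt = 1.5 / n                  # the tail vanishes beyond t = 1.5
    return sum(tail((k + 0.5) * dt) for k in range(n)) * dt

assert abs(plus_moment_via_tail() - exact_plus_moment()) < 1e-6
```

The midpoint rule is exact for the piecewise linear tail here, so the agreement is up to floating-point rounding only.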

3 Proofs

To prove the main results, we need the following lemmas.

Lemma 3.1

(Wang and Lu [7])

Let \(\{ {{X}_{n}},n\ge1 \}\) be a sequence of ANA random variables. If \(\{ {{f}_{n}},n\ge1 \}\) is a sequence of real nondecreasing (or nonincreasing) functions, then \(\{ {{f}_{n}} ( {{X}_{n}} ),n\ge1 \}\) is still a sequence of ANA random variables.

From Wang and Lu’s [7] Rosenthal-type inequality for ANA random variables we obtain the following result.

Lemma 3.2

(Wang and Lu [7])

For a positive integer \(\mathrm{N} \ge1\) and \(0\le s<\frac{1}{12}\), let \(\{ {{X}_{n}},n\ge 1 \}\) be a sequence of ANA random variables with \({{\rho }^{-}} ( \mathrm{N} )\le s\), \(E{{X}_{n}}=0\), and \(E{{\vert {{X}_{n}} \vert }^{2}}<\infty\). Then for all \(n\ge1\), there exists a positive constant \(C=C ( 2,\mathrm{N},s )\) such that
$$ E \Biggl( \max_{1\le j\le n} {{\Biggl\vert \sum _{i=1}^{j}{{{X}_{i}}} \Biggr\vert }^{2}} \Biggr)\le C\sum_{i=1}^{n}{EX_{i}^{2}}. $$
(3.1)

Lemma 3.3

(Adler and Rosalsky [17]; Adler et al. [18])

Suppose that \(\{ {{X}_{ni}},i\ge1,n\ge1 \}\) is an array of random variables stochastically dominated by a random variable X. Then, for all \(q>0\) and \(x>0\),
$$\begin{aligned}& E{{\vert {{X}_{ni}} \vert }^{q}}I \bigl( \vert {{X}_{ni}} \vert \le x \bigr)\le C \bigl( E{{\vert X \vert }^{q}}I \bigl( \vert X \vert \le x \bigr)+{{x}^{q}}P \bigl( \vert X \vert >x \bigr) \bigr), \end{aligned}$$
(3.2)
$$\begin{aligned}& E{{\vert {{X}_{ni}} \vert }^{q}}I \bigl( \vert {{X}_{ni}} \vert >x \bigr)\le CE{{\vert X \vert }^{q}}I \bigl( \vert X \vert >x \bigr). \end{aligned}$$
(3.3)

Lemma 3.4

(Wu et al. [19])

Let \(\{ {{a}_{ni}},i\ge1,n\ge1 \}\) be an array of real constants satisfying (1.1) for some \(\alpha>0\), and let X be a random variable. Let \({{b}_{n}}={{n}^{1/\alpha}}{{ ( \log n )}^{1/\gamma}}\) for some \(\gamma>0\). If \(p>\max \{ \alpha,\gamma \}\), then
$$ \sum_{n=1}^{\infty}{ \frac{1}{nb_{n}^{p}}\sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{p}}I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)}}\leq \left \{ \textstyle\begin{array}{l@{\quad}l} CE|X|^{\alpha}&\textit{for } \alpha>\gamma, \\ CE|X|^{\alpha}\log(1+|X|)&\textit{for } \alpha=\gamma, \\ CE|X|^{\gamma}&\textit{for } \alpha< \gamma. \end{array}\displaystyle \right . $$
(3.4)

Proof of Theorem 2.1

Without loss of generality, assume that \({{a}_{ni}}\ge0\) (otherwise, we shall use \(a_{ni}^{+}\) and \(a_{ni}^{-}\) instead of \({{a}_{ni}}\), and note that \({{a}_{ni}}=a_{ni}^{+}-a_{ni}^{-}\)). For fixed \(n\ge1\), define
$$\begin{aligned}& {{Y}_{ni}}=-{{b}_{n}}I ( {{a}_{ni}} {{X}_{ni}}< -{{b}_{n}} )+{{a}_{ni}} {{X}_{ni}}I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le {{b}_{n}} \bigr)+{{b}_{n}}I ( {{a}_{ni}} {{X}_{ni}}>{{b}_{n}} ),\quad i\ge1; \\& {{Z}_{ni}}={{a}_{ni}} {{X}_{ni}}-{{Y}_{ni}}= ( {{a}_{ni}} {{X}_{ni}}+{{b}_{n}} )I ( {{a}_{ni}} {{X}_{ni}}< -{{b}_{n}} )+ ( {{a}_{ni}} {{X}_{ni}}-{{b}_{n}} )I ( {{a}_{ni}} {{X}_{ni}}>{{b}_{n}} ); \\& A=\bigcap_{i=1}^{n}{ ( {{Y}_{ni}}={{a}_{ni}} {{X}_{ni}} )}, \qquad B=\bar{A}=\bigcup_{i=1}^{n}{ ( {{Y}_{ni}}\ne {{a}_{ni}} {{X}_{ni}} )}=\bigcup _{i=1}^{n}{ \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)}; \\& {{E}_{ni}}= \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >\varepsilon{{b}_{n}} \Biggr). \end{aligned}$$
It is easy to check that for all \(\varepsilon>0\),
$$ {{E}_{ni}}={{E}_{ni}}A\cup{{E}_{ni}}B\subset \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{Y}_{ni}}} \Biggr\vert > \varepsilon{{b}_{n}} \Biggr)\cup \Biggl( \bigcup _{i=1}^{n}{ \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)} \Biggr), $$
which implies that
$$\begin{aligned} P ( {{E}_{ni}} ) \le& P \Biggl(\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{Y}_{ni}}} \Biggr\vert >\varepsilon {{b}_{n}} \Biggr)+P \Biggl( \bigcup _{i=1}^{n}{ \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)} \Biggr) \\ \le& P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{ ( {{Y}_{ni}}-E{{Y}_{ni}} )} \Biggr\vert >\varepsilon{{b}_{n}}-\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{E{{Y}_{ni}}} \Biggr\vert \Biggr) \\ &{}+ \sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)}. \end{aligned}$$
(3.5)
First, we shall show that
$$ \frac{1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{E{{Y}_{ni}}} \Biggr\vert \to0\quad \text{as }n\to \infty. $$
(3.6)
For \(0<\alpha\le1\), it follows from (3.2) of Lemma 3.3, the Markov inequality, and \(E{{\vert X \vert }^{\alpha}}<\infty\) that
$$\begin{aligned} \frac{1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{E{{Y}_{ni}}} \Biggr\vert \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{ \vert E{{Y}_{ni}} \vert } \\ \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{E \vert {{a}_{ni}} {{X}_{ni}} \vert I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le{{b}_{n}} \bigr)}+C\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)} \\ \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{ \bigl( E\vert {{a}_{ni}}X \vert I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)+{{b}_{n}}P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr) \bigr)} \\ &{}+ C\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)} \\ \le& C\frac{1}{b_{n}^{\alpha}}\sum_{i=1}^{n}{a_{ni}^{\alpha }E{{ \vert X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert \le {{b}_{n}} \bigr)}+C\frac{1}{b_{n}^{\alpha}}\sum _{i=1}^{n}{a_{ni}^{\alpha}E{{ \vert X \vert }^{\alpha}}} \\ \le& C{{ ( \log n )}^{-\alpha/\gamma}}E{{ \vert X \vert }^{\alpha}}\to0 \quad \text{as }n\to\infty. \end{aligned}$$
(3.7)

From the definition of \({{Z}_{ni}}={{a}_{ni}}{{X}_{ni}}-{{Y}_{ni}}\) we know that when \({{a}_{ni}}{{X}_{ni}}>{{b}_{n}}\), \(0<{{Z}_{ni}}={{a}_{ni}}{{X}_{ni}}-{{b}_{n}}<{{a}_{ni}}{{X}_{ni}}\), and when \({{a}_{ni}}{{X}_{ni}}<-{{b}_{n}}\), \({{a}_{ni}}{{X}_{ni}}<{{Z}_{ni}}={{a}_{ni}}{{X}_{ni}}+{{b}_{n}}<0\). Hence, \(\vert {{Z}_{ni}} \vert <\vert {{a}_{ni}}{{X}_{ni}} \vert I ( \vert {{a}_{ni}}{{X}_{ni}} \vert >{{b}_{n}} )\).

For \(1<\alpha\le2\), it follows from \(E{{X}_{ni}}=0\), (3.3) of Lemma 3.3, and \(E{{\vert X \vert }^{\alpha}}<\infty\) again that
$$\begin{aligned} \frac{1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{E{{Y}_{ni}}} \Biggr\vert =& \frac{1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{E{{Z}_{ni}}} \Biggr\vert \\ \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{E \vert {{Z}_{ni}} \vert } \\ \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{E \vert {{a}_{ni}} {{X}_{ni}} \vert I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)} \\ \le& C\frac{1}{{{b}_{n}}}\sum_{i=1}^{n}{E \vert {{a}_{ni}}X \vert I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)} \\ \le& C\frac{1}{b_{n}^{\alpha}}\sum_{i=1}^{n}{a_{ni}^{\alpha }E{{ \vert X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)} \\ \le& C{{ ( \log n )}^{-\alpha/\gamma}}E{{ \vert X \vert }^{\alpha}}\to0 \quad \text{as }n\to\infty. \end{aligned}$$
(3.8)
By (3.7) and (3.8) we immediately obtain (3.6). Hence, for n large enough,
$$ P ( {{E}_{ni}} )\le P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{ ( {{Y}_{ni}}-E{{Y}_{ni}} )} \Biggr\vert >\frac{\varepsilon{{b}_{n}}}{2} \Biggr)+\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)}. $$
(3.9)
To prove (2.2), it suffices to show that
$$\begin{aligned}& I\triangleq\sum_{n=1}^{\infty}{ \frac{1}{n}P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{ ( {{Y}_{ni}}-E{{Y}_{ni}} )} \Biggr\vert >\frac{\varepsilon{{b}_{n}}}{2} \Biggr)}< \infty, \end{aligned}$$
(3.10)
$$\begin{aligned}& J\triangleq\sum_{n=1}^{\infty}{ \frac{1}{n}\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)}}< \infty. \end{aligned}$$
(3.11)
By Lemma 3.1 it obviously follows that \(\{ {{Y}_{ni}}-E{{Y}_{ni}},i\ge1,n\ge1 \}\) is still an array of rowwise ANA random variables. Hence, it follows from the Markov inequality and Lemma 3.2 that
$$\begin{aligned} I \le& C\sum_{n=1}^{\infty}{ \frac{1}{n}\frac {1}{b_{n}^{2}}E \Biggl(\max_{1\le j\le n} {{\Biggl\vert \sum_{i=1}^{j}{ ( {{Y}_{ni}}-E{{Y}_{ni}} )} \Biggr\vert }^{2}} \Biggr)} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \frac{1}{b_{n}^{2}}\sum_{i=1}^{n}{E{{\vert {{Y}_{ni}}-E{{Y}_{ni}} \vert }^{2}}}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \frac{1}{b_{n}^{2}}\sum_{i=1}^{n}{EY_{ni}^{2}}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \frac{1}{b_{n}^{2}}\sum_{i=1}^{n}{E{{\vert {{a}_{ni}} {{X}_{ni}} \vert }^{2}}I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le{{b}_{n}} \bigr)}}+C\sum_{n=1}^{\infty}{\frac{1}{n} \sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} \bigr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \frac{1}{b_{n}^{2}}\sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)}} \\ &{}+C\sum _{n=1}^{\infty }{\frac{1}{n}\frac{1}{b_{n}^{\alpha}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}} \\ \triangleq& {{I}_{1}}+{{I}_{2}}. \end{aligned}$$
(3.12)
From Lemma 3.4 (for \(p=2\)) and (2.1) we obtain that \({{I}_{1}}<\infty \). Next, it follows from (3.2) of Lemma 3.3 and from (1.1) that
$$\begin{aligned} {{I}_{2}} =&C\sum_{n=1}^{\infty}{ \frac{1}{{{n}^{2}}}{{ ( \log n )}^{-\alpha/\gamma}}\sum_{i=1}^{n}{E{{ \vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( {{\vert {{a}_{ni}}X \vert }^{\alpha}}>n{{ ( \log n )}^{\alpha/\gamma}} \bigr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{{{n}^{2}}}{{ ( \log n )}^{-\alpha/\gamma}}\sum_{i=1}^{n}{E{{ \vert {{a}_{ni}}X \vert }^{\alpha}}I \biggl( {{\vert X \vert }^{\alpha}}>\frac{n{{ ( \log n )}^{\alpha/\gamma}}}{\sum_{i=1}^{n}{{{\vert {{a}_{ni}} \vert }^{\alpha}}}} \biggr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{{{n}^{2}}}{{ ( \log n )}^{-\alpha/\gamma}}\sum_{i=1}^{n}{E{{ \vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( \vert X \vert >{{ ( \log n )}^{1/\gamma}} \bigr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n}{{ ( \log n )}^{-\alpha/\gamma}}E{{ \vert X \vert }^{\alpha}}I \bigl( \vert X \vert >{{ ( \log n )}^{1/\gamma}} \bigr)} \\ =& C\sum_{n=1}^{\infty}{\frac{1}{n}{{ ( \log n )}^{-\alpha/\gamma}}\sum_{k=n}^{\infty}{E{{ \vert X \vert }^{\alpha}}I \bigl( {{ ( \log k )}^{1/\gamma}}< \vert X \vert < {{ \bigl( \log ( k+1 ) \bigr)}^{1/\gamma}} \bigr)}} \\ =& C\sum_{k=1}^{\infty}{E{{ \vert X \vert }^{\alpha}}I \bigl( {{ ( \log k )}^{1/\gamma}}< \vert X \vert < {{ \bigl( \log ( k+1 ) \bigr)}^{1/\gamma}} \bigr)\sum _{n=1}^{k}{\frac{1}{n}{{ ( \log n )}^{-\alpha/\gamma}}}}. \end{aligned}$$
Note that
$$ \sum_{n=1}^{k}{\frac{1}{n}{{ ( \log n )}^{-\alpha /\gamma}}}= \left \{ \textstyle\begin{array}{l@{\quad}l} C&\mbox{for } \alpha>\gamma, \\ C\log\log k&\mbox{for } \alpha=\gamma, \\ C{{ ( \log k )}^{1-\alpha/\gamma}}&\mbox{for } \alpha< \gamma. \end{array}\displaystyle \right . $$
Therefore, we obtain that
$$ {{I}_{2}}\le \left \{ \textstyle\begin{array}{l@{\quad}l} CE{{\vert X \vert }^{\alpha}}&\mbox{for } \alpha>\gamma, \\ CE{{\vert X \vert }^{\alpha}}\log ( 1+\vert X \vert )&\mbox{for } \alpha=\gamma, \\ CE{{\vert X \vert }^{\gamma}}&\mbox{for } \alpha< \gamma. \end{array}\displaystyle \right . $$
Under the conditions of Theorem 2.1 it follows that \({{I}_{2}}<\infty \).
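The three regimes appearing in the estimate of \({{I}_{2}}\) can be checked numerically (a hypothetical Python sketch; the cutoffs 1000 and 10000 are arbitrary choices of ours): the tail increments of \(\sum_{n\le k}{{{n}^{-1}}{{ ( \log n )}^{-\alpha/\gamma}}}\) are smallest when \(\alpha>\gamma\) (convergent series), larger when \(\alpha=\gamma\) (\(\log\log k\) growth), and largest when \(\alpha<\gamma\) (\({{ ( \log k )}^{1-\alpha/\gamma}}\) growth):

```python
import math

def partial_sum(alpha, gamma, K):
    """sum_{n=2}^{K} n^(-1) (log n)^(-alpha/gamma)  (n = 1 skipped since log 1 = 0)."""
    r = alpha / gamma
    return sum(1.0 / (n * math.log(n) ** r) for n in range(2, K + 1))

def increment(alpha, gamma):
    """Tail increment of the partial sums between K = 1000 and K = 10000."""
    return partial_sum(alpha, gamma, 10000) - partial_sum(alpha, gamma, 1000)

# Bounded tail (alpha > gamma) < log log growth (alpha = gamma)
#                              < polylog growth (alpha < gamma).
assert increment(2, 1) < increment(1, 1) < increment(1, 2)
assert increment(2, 1) < 0.1   # consistent with a convergent series
```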
By (3.3) of Lemma 3.3 and the proof of \({{I}_{2}}<\infty\),
$$ J\le\sum_{n=1}^{\infty}{ \frac{1}{n}\frac{1}{b_{n}^{\alpha }}\sum_{i=1}^{n}{E{{ \vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}}< \infty. $$
(3.13)
The proof of Theorem 2.1 is completed. □

Proof of Theorem 2.2

For all \(\varepsilon>0\), we have
$$\begin{aligned}& \sum_{n=1}^{\infty}{ \frac{1}{n}}E \Biggl( \frac {1}{{{b}_{n}}}\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert -\varepsilon \Biggr)_{+}^{q} \\& \quad = \sum_{n=1}^{\infty}{\frac{1}{n} \int_{0}^{\infty}{P \Biggl( \frac{1}{{{b}_{n}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert -\varepsilon >{{t}^{1/q}} \Biggr) \,dt}} \\& \quad = \sum_{n=1}^{\infty}{\frac{1}{n} \int_{0}^{1}{P \Biggl( \frac {1}{{{b}_{n}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >\varepsilon+{{t}^{1/q}} \Biggr) \,dt}} \\& \qquad {} + \sum_{n=1}^{\infty}{ \frac{1}{n} \int_{1}^{\infty}{P \Biggl( \frac{1}{{{b}_{n}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >\varepsilon +{{t}^{1/q}} \Biggr) \,dt}} \\& \quad \le \sum_{n=1}^{\infty}{\frac{1}{n}P \Biggl(\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >\varepsilon{{b}_{n}} \Biggr)} \\& \qquad {} + \sum_{n=1}^{\infty}{ \frac{1}{n} \int_{1}^{\infty}{P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >{{b}_{n}} {{t}^{1/q}} \Biggr)\,dt}} \\& \quad \triangleq {{K}_{1}}+{{K}_{2}}. \end{aligned}$$
(3.14)
To prove (2.3), it suffices to prove \({{K}_{1}}<\infty\) and \({{K}_{2}}<\infty\). By Theorem 2.1 we obtain that \({{K}_{1}}<\infty\). Applying notation and methods similar to those in the proof of Theorem 2.1, for fixed \(n\ge1\), \(i\ge1\), and all \(t\ge1\), define
$$\begin{aligned}& Y_{ni}'=-{{b}_{n}} {{t}^{1/q}}I \bigl( {{a}_{ni}} {{X}_{ni}}< -{{b}_{n}} {{t}^{1/q}} \bigr)+{{a}_{ni}} {{X}_{ni}}I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le {{b}_{n}} {{t}^{1/q}} \bigr)+{{b}_{n}} {{t}^{1/q}}I \bigl( {{a}_{ni}} {{X}_{ni}}>{{b}_{n}} {{t}^{1/q}} \bigr); \\& Z_{ni}'={{a}_{ni}} {{X}_{ni}}-Y_{ni}' \\& \hphantom{Z_{ni}'}= \bigl( {{a}_{ni}} {{X}_{ni}}+{{b}_{n}} {{t}^{1/q}} \bigr)I \bigl( {{a}_{ni}} {{X}_{ni}}< -{{b}_{n}} {{t}^{1/q}} \bigr)+ \bigl( {{a}_{ni}} {{X}_{ni}}-{{b}_{n}} {{t}^{1/q}} \bigr)I \bigl( {{a}_{ni}} {{X}_{ni}}>{{b}_{n}} {{t}^{1/q}} \bigr); \\& {{A}'}=\bigcap_{i=1}^{n}{ \bigl( Y_{ni}'={{a}_{ni}} {{X}_{ni}} \bigr)}, \qquad {{B}'}={{\bar {A}}'}=\bigcup _{i=1}^{n}{ \bigl( Y_{ni}'\ne {{a}_{ni}} {{X}_{ni}} \bigr)}=\bigcup _{i=1}^{n}{ \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}; \\& E_{ni}'= \Biggl( \max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{{{a}_{ni}} {{X}_{ni}}} \Biggr\vert >{{b}_{n}} {{t}^{1/q}} \Biggr). \end{aligned}$$
It is easy to check that for all \(t\ge1\),
$$\begin{aligned} P \bigl( E_{ni}' \bigr) \le& P \Biggl( \max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{Y_{ni}'} \Biggr\vert >{{b}_{n}} {{t}^{1/q}} \Biggr)+P \Biggl( \bigcup _{i=1}^{n}{ \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \Biggr) \\ \le& P \Biggl(\max_{1\le j\le n} \Biggl\vert \sum _{i=1}^{j}{ \bigl( Y_{ni}'-EY_{ni}' \bigr)} \Biggr\vert >{{b}_{n}} {{t}^{1/q}}-\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{EY_{ni}'} \Biggr\vert \Biggr) \\ &{}+ \sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}. \end{aligned}$$
(3.15)
First, we shall show that
$$ \max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{EY_{ni}'} \Biggr\vert \to0 \quad \text{as }n\to \infty. $$
(3.16)
Similarly to the proofs of (3.7) and (3.8), for \(0<\alpha\le1\), it follows from (3.2) of Lemma 3.3, the Markov inequality, and \(E{{\vert X \vert }^{\alpha}}<\infty\) that
$$\begin{aligned} \max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{EY_{ni}'} \Biggr\vert \le& C\max_{t\ge 1} \frac{1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{\bigl\vert EY_{ni}' \bigr\vert } \\ \le& C\max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{E\vert {{a}_{ni}} {{X}_{ni}} \vert I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le{{b}_{n}} {{t}^{1/q}} \bigr)} \\ & {} + C\max_{t\ge1} \sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \\ \le& C\max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{E\vert {{a}_{ni}}X \vert I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} {{t}^{1/q}} \bigr)} \\ &{} + C\max_{t\ge1} \sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \\ \le& C\max_{t\ge1} \frac{1}{b_{n}^{\alpha }{{t}^{\alpha/q}}}\sum _{i=1}^{n}{a_{ni}^{\alpha}E{{ \vert X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert \le {{b}_{n}} {{t}^{1/q}} \bigr)} \\ &{} + C\max_{t\ge1} \frac{1}{b_{n}^{\alpha }{{t}^{\alpha/q}}}\sum _{i=1}^{n}{a_{ni}^{\alpha}E{{ \vert X \vert }^{\alpha}}} \\ \le& C{{ ( \log n )}^{-\alpha/\gamma}}E{{ \vert X \vert }^{\alpha}}\to0 \quad \text{as }n\to\infty. \end{aligned}$$
(3.17)
Noting that \(\vert Z_{ni}' \vert <\vert {{a}_{ni}}{{X}_{ni}} \vert I ( \vert {{a}_{ni}}{{X}_{ni}} \vert >{{b}_{n}}{{t}^{1/q}} )\), for \(1<\alpha\le2\), it follows from \(E{{X}_{ni}}=0\), (3.3) of Lemma 3.3, and \(E{{\vert X \vert }^{\alpha}}<\infty\) again that
$$\begin{aligned} \max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{EY_{ni}'} \Biggr\vert =& \max_{t\ge1} \frac{1}{{{b}_{n}}{{t}^{1/q}}} \max _{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{EZ_{ni}'} \Biggr\vert \\ \le& C\max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{E\bigl\vert Z_{ni}' \bigr\vert } \\ \le& C\max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{E\vert {{a}_{ni}} {{X}_{ni}} \vert I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \\ \le& C\max_{t\ge1} \frac {1}{{{b}_{n}}{{t}^{1/q}}}\sum _{i=1}^{n}{E\vert {{a}_{ni}}X \vert I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \\ \le& C\max_{t\ge1} \frac{1}{b_{n}^{\alpha }{{t}^{\alpha/q}}}\sum _{i=1}^{n}{a_{ni}^{\alpha}E{{ \vert X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)} \\ \le& C{{ ( \log n )}^{-\alpha/\gamma}}E{{ \vert X \vert }^{\alpha}}\to0 \quad \text{as }n\to\infty. \end{aligned}$$
(3.18)
To prove \({{K}_{2}}<\infty\), it suffices to show that
$$\begin{aligned}& {{K}_{21}}\triangleq\sum_{n=1}^{\infty}{ \frac{1}{n} \int _{1}^{\infty}{P \Biggl(\max_{1\le j\le n} \Biggl\vert \sum_{i=1}^{j}{ \bigl( Y_{ni}'-EY_{ni}' \bigr)} \Biggr\vert >\frac{{{b}_{n}}{{t}^{1/q}}}{2} \Biggr)\,dt}}< \infty, \end{aligned}$$
(3.19)
$$\begin{aligned}& {{K}_{22}}\triangleq\sum_{n=1}^{\infty}{ \frac{1}{n} \int _{1}^{\infty}{\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}}< \infty. \end{aligned}$$
(3.20)
Hence, it follows from the Markov inequality and Lemma 3.2 that
$$\begin{aligned} {{K}_{21}} \le& C\sum_{n=1}^{\infty}{ \frac{1}{n} \int _{1}^{\infty}{\frac{1}{b_{n}^{2}{{t}^{2/q}}}E \Biggl(\max _{1\le j\le n} {{\Biggl\vert \sum_{i=1}^{j}{ \bigl( Y_{ni}'-EY_{ni}' \bigr)} \Biggr\vert }^{2}} \Biggr)\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \int_{1}^{\infty}{\frac {1}{b_{n}^{2}{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\bigl\vert Y_{ni}'-EY_{ni}' \bigr\vert }^{2}}}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \int_{1}^{\infty}{\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ &{}+ C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \int_{1}^{\infty }{\frac{1}{{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}} {{X}_{ni}} \vert }^{2}}I \bigl( \vert {{a}_{ni}} {{X}_{ni}} \vert \le{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \int_{1}^{\infty}{\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ &{}+ C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \int_{1}^{\infty }{\frac{1}{{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)}\,dt}} \\ &{}+ C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \int_{1}^{\infty }{\frac{1}{{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}< \vert {{a}_{ni}}X \vert \le {{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}}. \end{aligned}$$
(3.21)
For \(0< q<\alpha\) and \(\sum_{i=1}^{n}{{{\vert {{a}_{ni}} \vert }^{\alpha}}}=O ( n )\), similarly to the proof of \({{I}_{2}}<\infty\), we obtain that
$$\begin{aligned} K_{22} \leq& \sum_{n=1}^{\infty}{ \frac{1}{n} \int_{1}^{\infty }{\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \int_{0}^{\infty}{\sum_{i=1}^{n}{P \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \int_{0}^{\infty}{\sum_{i=1}^{n}{P \biggl( \frac{{{\vert {{a}_{ni}}X \vert }^{q}}}{b_{n}^{q}}>t \biggr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n}\sum _{i=1}^{n}{\frac{E{{\vert {{a}_{ni}}X \vert }^{q}}}{b_{n}^{q}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{\alpha}} \sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}} \\ < & \infty\quad (\text{see the proof of } {{I}_{2}}< \infty). \end{aligned}$$
For \(0< q<\alpha\le2\), it follows from Lemma 3.4 and (2.1) that
$$\begin{aligned} \nabla_{1} \triangleq& \sum_{n=1}^{\infty}{ \frac {1}{nb_{n}^{2}} \int_{1}^{\infty}{\frac{1}{{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{n} \frac{1}{b_{n}^{2}}\sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( \vert {{a}_{ni}}X \vert \le{{b}_{n}} \bigr)}} < \infty. \end{aligned}$$
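The finiteness of the inner integral above is precisely where the restriction \(q<2\) enters: since \(2/q>1\),
$$ \int_{1}^{\infty}{\frac{1}{{{t}^{2/q}}}\,dt}=\frac{1}{2/q-1}=\frac{q}{2-q}< \infty \quad \text{for } 0< q< 2, $$
and the remaining double sum is finite by Lemma 3.4 and (2.1), as stated above.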
Taking \(t={{x}^{q}}\) (so that \({{t}^{-2/q}}\,dt=q{{x}^{q-3}}\,dx\)), it follows from the Markov inequality and (3.2) of Lemma 3.3 that
$$\begin{aligned} \nabla_{2} \triangleq& \sum_{n=1}^{\infty}{ \frac {1}{nb_{n}^{2}} \int_{1}^{\infty}{\frac{1}{{{t}^{2/q}}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}< \vert {{a}_{ni}}X \vert \le{{b}_{n}} {{t}^{1/q}} \bigr)}\,dt}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \int _{1}^{\infty}{{{x}^{q-3}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}< \vert {{a}_{ni}}X \vert \le{{b}_{n}}x \bigr)}\, dx}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \sum_{m=1}^{\infty}{ \int_{m}^{m+1}{{{x}^{q-3}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}< \vert {{a}_{ni}}X \vert \le{{b}_{n}}x \bigr)}\, dx}}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \sum_{m=1}^{\infty}{{{m}^{q-3}}\sum _{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}< \vert {{a}_{ni}}X \vert \le {{b}_{n}} ( m+1 ) \bigr)}}} \\ =& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}}\sum _{i=1}^{n}{\sum_{m=1}^{\infty}{ \sum_{s=1}^{m}{{{m}^{q-3}}E{{ \vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}s< \vert {{a}_{ni}}X \vert \le{{b}_{n}} ( s+1 ) \bigr)}}}} \\ =& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}}\sum _{i=1}^{n}{\sum_{s=1}^{\infty}{E{{ \vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}s< \vert {{a}_{ni}}X \vert \le{{b}_{n}} ( s+1 ) \bigr)\sum_{m=s}^{\infty}{{{m}^{q-3}}}}}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{2}} \sum_{i=1}^{n}{\sum _{s=1}^{\infty}{E{{ \vert {{a}_{ni}}X \vert }^{2}}I \bigl( {{b}_{n}}s< \vert {{a}_{ni}}X \vert \le{{b}_{n}} ( s+1 ) \bigr){{s}^{q-2}}}}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{q}} \sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{q}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}} \\ \le& C\sum_{n=1}^{\infty}{\frac{1}{nb_{n}^{\alpha}} \sum_{i=1}^{n}{E{{\vert {{a}_{ni}}X \vert }^{\alpha}}I \bigl( \vert {{a}_{ni}}X \vert >{{b}_{n}} \bigr)}} \\ < & \infty \quad (\text{see the proof of } {{I}_{2}}< \infty). \end{aligned}$$
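The elementary estimate behind the interchange of sums above is the tail-sum bound \(\sum_{m=s}^{\infty}{{m}^{q-3}}\le C{{s}^{q-2}}\), valid for \(0<q<2\) since \(q-3<-1\). The snippet below is a numerical sanity check only; the grid of values of \(q\) and \(s\) and the explicit constant \(C=1+1/(2-q)\) are illustrative choices, not part of the proof:

```python
import numpy as np

def tail_sum(s, q, terms=2_000_000):
    """Truncated tail sum sum_{m=s}^{s+terms-1} m^(q-3)."""
    m = np.arange(s, s + terms, dtype=float)
    return np.sum(m ** (q - 3.0))

# Comparing the tail with the integral of x^(q-3) over [s, infinity) gives
# sum_{m>=s} m^(q-3) <= s^(q-3) + s^(q-2)/(2-q) <= (1 + 1/(2-q)) s^(q-2),
# so C = 1 + 1/(2-q) works for every 0 < q < 2.
for q in (0.5, 1.0, 1.5):
    C = 1.0 + 1.0 / (2.0 - q)
    for s in (1, 5, 50, 500):
        assert tail_sum(s, q) <= C * s ** (q - 2.0)
```

Truncating the infinite sum at two million terms only underestimates the tail, so the assertions check the bound in the safe direction.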
The proof of Theorem 2.2 is completed. □

Declarations

Acknowledgements

The authors are most grateful to the Editor, Prof. A. Volodin, and the anonymous referees for carefully reading the paper and for offering valuable suggestions, which greatly improved this paper. This work was supported by the National Natural Science Foundation of China (11526085, 71501025, 11401127), the Humanities and Social Sciences Foundation for Youth Scholars of the Ministry of Education of China (15YJCZH066), the Guangxi Provincial Natural Science Foundation of China (2014GXNSFBA118006, 2014GXNSFCA118015), the Scientific Research Fund of the Guangxi Provincial Education Department (2013YB104), the Construct Program of the Key Discipline in Hunan Province (No. [2011]76), and the Science Foundation of Hengyang Normal University (14B30).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Mathematics and Statistics, Hengyang Normal University
(2)
College of Science, Guilin University of Technology
(3)
School of Mathematical Science, University of Electronic Science and Technology of China

References

  1. Sung, SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 52, 447-454 (2011)
  2. Zhou, XC, Tan, CC, Lin, JG: On the strong laws for weighted sums of \({{\rho}^{*}}\)-mixing random variables. J. Inequal. Appl. 2011, Article ID 157816 (2011)
  3. Sung, SH: On the strong convergence for weighted sums of \({{\rho }^{*}}\)-mixing random variables. Stat. Pap. 54, 773-781 (2013)
  4. Zhang, LX, Wang, XY: Convergence rates in the strong laws of asymptotically negatively associated random fields. Appl. Math. J. Chin. Univ. Ser. B 14(4), 406-416 (1999)
  5. Zhang, LX: A functional central limit theorem for asymptotically negatively dependent random fields. Acta Math. Hung. 86(3), 237-259 (2000)
  6. Zhang, LX: Central limit theorems for asymptotically negatively associated random fields. Acta Math. Sin. Engl. Ser. 16(4), 691-710 (2000)
  7. Wang, JF, Lu, FB: Inequalities of maximum partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. Engl. Ser. 22(3), 693-700 (2006)
  8. Wang, JF, Zhang, LX: A Berry-Esseen theorem and a law of the iterated logarithm for asymptotically negatively associated sequences. Acta Math. Sin. Engl. Ser. 23(1), 127-136 (2007)
  9. Liu, XD, Liu, JX: Moments of the maximum of normed partial sums of \({{\rho }^{-}}\)-mixing random variables. Appl. Math. J. Chin. Univ. Ser. B 24(3), 355-360 (2009)
  10. Yuan, DM, Wu, XS: Limiting behavior of the maximum of the partial sum for asymptotically negatively associated random variables under residual Cesàro alpha-integrability assumption. J. Stat. Plan. Inference 140, 2395-2402 (2010)
  11. Budsaba, K, Chen, PY, Volodin, A: Limiting behavior of moving average processes based on a sequence of \({{\rho}^{-}}\)-mixing and NA random variables. Lobachevskii J. Math. 26, 17-25 (2007)
  12. Tan, XL, Zhang, Y, Zhang, Y: An almost sure central limit theorem of products of partial sums for \({{\rho}^{-}}\)-mixing sequences. J. Inequal. Appl. 2012, 51 (2012). doi:10.1186/1029-242X-2012-51
  13. Ko, MH: The Hájek-Rényi inequality and strong law of large numbers for ANA random variables. J. Inequal. Appl. 2014, 521 (2014). doi:10.1186/1029-242X-2014-521
  14. Zhang, Y: Complete moment convergence for moving average process generated by \({{\rho}^{-}}\)-mixing random variables. J. Inequal. Appl. 2015, 245 (2015). doi:10.1186/s13660-015-0766-5
  15. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33(2), 25-31 (1947)
  16. Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)
  17. Adler, A, Rosalsky, A: Some general strong laws for weighted sums of stochastically dominated random variables. Stoch. Anal. Appl. 5, 1-16 (1987)
  18. Adler, A, Rosalsky, A, Taylor, RL: Strong laws of large numbers for weighted sums of random variables in normed linear spaces. Int. J. Math. Math. Sci. 12, 507-530 (1989)
  19. Wu, YF, Sung, SH, Volodin, A: A note on the rates of convergence for weighted sums of \({{\rho}^{*}}\)-mixing random variables. Lith. Math. J. 54(2), 220-228 (2014)

Copyright

© Huang et al. 2016