• Research
• Open Access

# A note on the strong limit theorem for weighted sums of sequences of negatively dependent random variables

Journal of Inequalities and Applications 2012, 2012:233

https://doi.org/10.1186/1029-242X-2012-233

• Received: 15 February 2012
• Accepted: 10 August 2012
• Published:

## Abstract

In this paper, the strong limit theorem for weighted sums of sequences of negatively dependent random variables is further studied. As an application, the complete convergence theorem for sequences of negatively dependent random variables is obtained. Our results partly generalize and improve the corresponding results of Cai (Metrika 68:323-331, 2008) and Wang et al. (Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat., 2011, doi:10.1007/s13398-011-0048-0) to negatively dependent random variables under mild moment conditions.

MSC: 60F15.

## Keywords

• negatively dependent random variables
• complete convergence
• weighted sums

## 1 Introduction

Definition 1.1 The random variables ${X}_{1},{X}_{2},\dots ,{X}_{n}$ are said to be negatively dependent (ND) if
$P\left({X}_{1}\le {x}_{1},{X}_{2}\le {x}_{2},\dots ,{X}_{n}\le {x}_{n}\right)\le \prod _{j=1}^{n}P\left({X}_{j}\le {x}_{j}\right);$
and
$P\left({X}_{1}>{x}_{1},{X}_{2}>{x}_{2},\dots ,{X}_{n}>{x}_{n}\right)\le \prod _{j=1}^{n}P\left({X}_{j}>{x}_{j}\right),$

for all ${x}_{1},{x}_{2},\dots ,{x}_{n}\in \mathbb{R}$. An infinite sequence of random variables $\left\{{X}_{i};i\ge 1\right\}$ is said to be ND if every finite subset ${X}_{1},{X}_{2},\dots ,{X}_{n}$ is ND.

An array of random variables $\left\{{X}_{ni};i\ge 1,n\ge 1\right\}$ is called rowwise ND random variables if for every $n\ge 1$, $\left\{{X}_{ni};i\ge 1\right\}$ is a sequence of ND random variables.
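As a quick numerical illustration (ours, not from the paper), the antithetic pair $\left({X}_{1},{X}_{2}\right)=\left(U,1-U\right)$ with $U\sim \text{Uniform}\left(0,1\right)$ is a classic ND (indeed NA) example; the sketch below checks the lower-orthant inequality of Definition 1.1 empirically on a small grid:

```python
import random

# Illustration (not from the paper): for the antithetic pair
# (X1, X2) = (U, 1 - U), U ~ Uniform(0, 1), we check empirically that
# P(X1 <= a, X2 <= b) <= P(X1 <= a) * P(X2 <= b) on a grid of (a, b).
random.seed(0)
n = 200_000
u = [random.random() for _ in range(n)]

worst_gap = float("-inf")  # largest observed (joint - product); should stay <= 0 up to MC error
for a in (0.2, 0.5, 0.8):
    for b in (0.3, 0.6, 0.9):
        joint = sum(1 for x in u if x <= a and 1 - x <= b) / n
        prod = (sum(1 for x in u if x <= a) / n) * (sum(1 for x in u if 1 - x <= b) / n)
        worst_gap = max(worst_gap, joint - prod)

print(worst_gap)
```

Analytically, $P\left({X}_{1}\le a,{X}_{2}\le b\right)=max\left(0,a+b-1\right)\le ab$, so the observed gap stays negative up to Monte Carlo error.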

Definition 1.2 The random variables ${X}_{1},{X}_{2},\dots ,{X}_{n}$, $n\ge 2$ are said to be negatively associated (NA) if for every pair of disjoint subsets ${A}_{1}$ and ${A}_{2}$ of $\left\{1,2,\dots ,n\right\}$,
$cov\left({f}_{1}\left({X}_{i};i\in {A}_{1}\right),{f}_{2}\left({X}_{j};j\in {A}_{2}\right)\right)\le 0,$

whenever ${f}_{1}$ and ${f}_{2}$ are increasing for every variable (or decreasing for every variable), such that the covariance exists. An infinite family $\left\{{X}_{i};i\ge 1\right\}$ of random variables is said to be NA if every finite subfamily is NA.

The concept of ND random variables was introduced by Ebrahimi and Ghosh, and the concept of NA random variables was introduced by Joag-Dev and Proschan. Obviously, independent random variables are ND. Joag-Dev and Proschan pointed out that NA random variables are ND, and they presented an example of a random vector $X=\left({X}_{1},{X}_{2},{X}_{3},{X}_{4}\right)$ which is ND but not NA. Thus ND is strictly weaker than NA. Because of their wide applications, ND random variables have received more and more attention recently, and a large number of limit theorems for them have been established by many authors. Hence, extending the limit properties of independent or NA random variables to the case of ND random variables is highly desirable and of considerable significance in both theory and application.

As Bai and Cheng remarked, many useful linear statistics based on a random sample are weighted sums of independent identically distributed (i.i.d.) random variables. Examples include least-squares estimators, nonparametric regression function estimators, and jackknife estimates. In this respect, strong laws for weighted sums have seen significant progress in probability theory, with applications in mathematical statistics, and many authors have studied them. In the independent case, Bai and Cheng proved the following strong law of large numbers for weighted sums.

Theorem 1.1 (Bai and Cheng)

Let $\left\{X,{X}_{n};n\ge 1\right\}$ be a sequence of i.i.d. random variables with $E{X}_{n}=0$. Suppose that $0<\alpha$, $\beta <\mathrm{\infty }$, $1\le p<2$ and $1/p=1/\alpha +1/\beta$, and let $\left\{{a}_{ni};1\le i\le n,n\ge 1\right\}$ be an array of real constants such that
${A}_{\alpha }=\underset{n\to \mathrm{\infty }}{lim sup}{A}_{\alpha ,n}<\mathrm{\infty },\phantom{\rule{1em}{0ex}}{A}_{\alpha ,n}^{\alpha }={n}^{-1}\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }.$
(1.1)
If $E|{X}_{n}{|}^{\beta }<\mathrm{\infty }$, then
$\underset{n\to \mathrm{\infty }}{lim}{n}^{-1/p}\left(\sum _{i=1}^{n}{a}_{ni}{X}_{i}\right)=0,\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(1.2)
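A numerical sketch of Theorem 1.1 (our illustration, with arbitrarily chosen parameters): take $\alpha =\beta =2$, so $p=1$; ${X}_{i}\sim \text{Uniform}\left(-1,1\right)$ (mean zero, $E|X{|}^{2}<\mathrm{\infty }$); and weights ${a}_{ni}=sin\left(i\right)$, for which ${n}^{-1}{\sum }_{i=1}^{n}|{a}_{ni}{|}^{2}$ is bounded, so (1.1) holds.

```python
import math
import random

# Sketch of Theorem 1.1 (our chosen parameters): alpha = beta = 2 gives p = 1,
# X_i ~ Uniform(-1, 1) has mean zero and finite second moment, and the weights
# a_ni = sin(i) satisfy (1.1).  We watch n^{-1/p} * sum a_ni X_i shrink.
random.seed(1)

def weighted_mean(n):
    s = sum(math.sin(i) * random.uniform(-1.0, 1.0) for i in range(1, n + 1))
    return s / n  # n^{-1/p} * sum_{i<=n} a_ni X_i with p = 1

ratios = [abs(weighted_mean(n)) for n in (1_000, 10_000, 100_000)]
print(ratios)
```

The standard deviation of the weighted mean here is of order ${n}^{-1/2}$, so the printed values drift toward zero, consistent with (1.2).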

Theorem 1.2 (Bai and Cheng)

Let $\left\{X,{X}_{n};n\ge 1\right\}$ be a sequence of i.i.d. random variables satisfying the exponential moment condition
$Eexp\left(h|X{|}^{\gamma }\right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{for some}\phantom{\rule{0.25em}{0ex}}h>0\phantom{\rule{0.25em}{0ex}}\text{and}\phantom{\rule{0.25em}{0ex}}\gamma >0,$
(1.3)
and let $\left\{{a}_{ni};1\le i\le n,n\ge 1\right\}$ be an array of real constants such that (1.1) is satisfied for some $\alpha \in \left(0,2\right)$. Then for $0<\alpha \le 1$ and ${b}_{n}={n}^{1/\alpha }{\left(logn\right)}^{1/\gamma }$,
$\sum _{i=1}^{n}{a}_{ni}{X}_{i}/{b}_{n}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(1.4)
Moreover, for $1<\alpha <2$ and ${b}_{n}={n}^{1/\alpha }{\left(logn\right)}^{1/\gamma +\delta }$, and $E{X}_{n}=0$,
$\sum _{i=1}^{n}{a}_{ni}{X}_{i}/{b}_{n}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}},$
(1.5)

where $\delta =\gamma \left(\alpha -1\right)/\alpha \left(1+\gamma \right)$.

Wu generalized and improved Theorem 1.1 to ND random variables and removed the identically distributed assumption, as follows.

Theorem 1.3 (Wu)

Let $\left\{{X}_{n};n\ge 1\right\}$ be a sequence of ND random variables which is stochastically dominated by a random variable X. Suppose that $0<\alpha ,\beta <\mathrm{\infty }$, $0<p<2$ and $1/p=1/\alpha +1/\beta$. If $\beta >1$, further assume that $E{X}_{n}=0$. Let $\left\{{a}_{ni};1\le i\le n,n\ge 1\right\}$ be an array of real constants such that
$\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }\ll n.$
(1.6)
If $E|X{|}^{\beta }<\mathrm{\infty }$, then
$\underset{n\to \mathrm{\infty }}{lim}{n}^{-1/p}\left(\sum _{i=1}^{n}{a}_{ni}{X}_{i}\right)=0,\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(1.7)

Recently, Cai and Wang et al. studied the complete convergence for NA random variables and for arrays of rowwise ND random variables, respectively, under exponential moment conditions. Their results generalize and improve Theorem 1.2 to NA and ND random variables. Inspired by Cai, Wang et al., and the other papers mentioned above, we investigate the limit behavior of ND random variables and obtain some complete convergence results, using methods different from those of Cai and Wang et al.

The main purpose of this paper is to further study the complete convergence for ND random variables and arrays of rowwise ND random variables under weaker moment conditions. As applications, complete convergence results for linear statistics of ND random variables and of arrays of rowwise ND random variables are obtained without the assumption of identical distribution. The results obtained not only generalize Theorem 1.2 to ND and arrays of rowwise ND random variables, but also partly improve the corresponding results of Cai and Wang et al.

Throughout this paper, C will denote a positive constant whose value may change from one appearance to the next, and ${a}_{n}=O\left({b}_{n}\right)$ will mean ${a}_{n}\le C{b}_{n}$.

We will use the following concept in this paper. Let $\left\{{X}_{n};n\ge 1\right\}$ be a sequence of random variables, and let X be a nonnegative random variable. If there exists a constant C ($0<C<\mathrm{\infty }$) such that
$P\left(|{X}_{n}|\ge t\right)\le CP\left(X\ge t\right),$
(1.8)

for all $t\ge 0$ and $n\ge 1$, then $\left\{{X}_{n};n\ge 1\right\}$ is said to be stochastically dominated by X.
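For concreteness, here is a toy example of ours (not from the paper): the family ${X}_{n}\sim \text{Uniform}\left[-{c}_{n},{c}_{n}\right]$ with ${c}_{n}=1-1/\left(n+1\right)$ is stochastically dominated by $X=|U|$, $U\sim \text{Uniform}\left(-1,1\right)$, with constant $C=1$ in (1.8), since $P\left(|{X}_{n}|\ge t\right)=1-t/{c}_{n}\le 1-t=P\left(X\ge t\right)$ for $t\in \left[0,{c}_{n}\right]$. The check below verifies this analytically on a grid:

```python
# Toy example (ours): X_n ~ Uniform[-c_n, c_n] with c_n = 1 - 1/(n+1) is
# stochastically dominated, with C = 1 in (1.8), by X = |U|, U ~ Uniform(-1, 1).
def tail_xn(n, t):
    """P(|X_n| >= t) for X_n ~ Uniform[-c_n, c_n]."""
    c = 1.0 - 1.0 / (n + 1)
    return max(0.0, 1.0 - t / c)

def tail_x(t):
    """P(X >= t) for X = |Uniform(-1, 1)|."""
    return max(0.0, 1.0 - t)

dominated = all(tail_xn(n, k / 100) <= tail_x(k / 100)
                for n in range(1, 50) for k in range(0, 101))
print(dominated)
```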

## 2 Main results and proofs

Now, we state and prove our main results of this paper.

Theorem 2.1 Let $\left\{{X}_{n};n\ge 1\right\}$ be a sequence of ND random variables which is stochastically dominated by a random variable X. Let ${T}_{n}={\sum }_{i=1}^{n}{a}_{ni}{X}_{i}$, $n\ge 1$, where the weights $\left\{{a}_{ni}:1\le i\le n,n\ge 1\right\}$ are a triangular array of real constants such that ${a}_{ni}=0$ for $i>n$. Let
${A}_{\beta }=\underset{n\to \mathrm{\infty }}{lim sup}{A}_{\beta ,n}<\mathrm{\infty };\phantom{\rule{1em}{0ex}}{A}_{\beta ,n}={n}^{-1}\sum _{i=1}^{n}|{a}_{ni}{|}^{\beta },$
(2.1)
where $\beta =max\left(\alpha ,\gamma \right)$ for some $0<\alpha \le 2$, $\gamma >0$ and $\alpha \ne \gamma$. Assume that $E{X}_{n}=0$ for $1<\alpha \le 2$ and $E|X{|}^{\beta }<\mathrm{\infty }$. Then
$\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(|{T}_{n}|>\epsilon {b}_{n}\right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{for every}\phantom{\rule{0.25em}{0ex}}\epsilon >0,$
(2.2)

where ${b}_{n}={n}^{1/\alpha }{\left(logn\right)}^{1/\gamma }$.
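Theorem 2.1 in particular forces $|{T}_{n}|/{b}_{n}$ to be small for large n. A Monte Carlo sanity check under simple assumptions of our choosing (independent standard normals, which are ND; ${a}_{ni}=1$; $\alpha =2$, $\gamma =1$, so $\beta =2$ and ${b}_{n}={n}^{1/2}logn$):

```python
import math
import random

# Sanity check of Theorem 2.1 (our chosen parameters): X_i i.i.d. N(0, 1)
# (independent, hence ND; EX_i = 0, E|X|^2 < infinity), weights a_ni = 1
# (so (2.1) holds with beta = 2), alpha = 2, gamma = 1, b_n = n^{1/2} log n.
random.seed(2)
alpha, gamma = 2.0, 1.0

checkpoints = (100, 10_000, 1_000_000)
ratios, s, i = [], 0.0, 0
for n in checkpoints:
    while i < n:
        i += 1
        s += random.gauss(0.0, 1.0)
    b_n = n ** (1.0 / alpha) * math.log(n) ** (1.0 / gamma)
    ratios.append(abs(s) / b_n)  # |T_n| / b_n, which should drift toward 0
print(ratios)
```

Here $|{T}_{n}|/{b}_{n}$ behaves like $|Z|/logn$ for a standard normal Z, so the printed ratios shrink as n grows.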

In order to prove our results, we need the following lemmas.

Lemma 2.1 (Bozorgnia et al.)

Let random variables ${X}_{1},{X}_{2},\dots ,{X}_{n}$ be ND, ${f}_{1},{f}_{2},\dots ,{f}_{n}$ be all nondecreasing (or all nonincreasing) functions, then random variables ${f}_{1}\left({X}_{1}\right),{f}_{2}\left({X}_{2}\right),\dots ,{f}_{n}\left({X}_{n}\right)$ are ND.

Lemma 2.2 (Asadian et al.)

Let $\left\{{X}_{n};n\ge 1\right\}$ be a sequence of ND random variables with $E{X}_{n}=0$ and $E|{X}_{n}{|}^{p}<\mathrm{\infty }$ for some $p\ge 2$ and for all $n\ge 1$. Then there exists a positive constant $C=C\left(p\right)$ depending only on p such that for all $n\ge 1$,
$E\left(|\sum _{i=1}^{n}{X}_{i}{|}^{p}\right)\le C\left[\sum _{i=1}^{n}E|{X}_{i}{|}^{p}+{\left(\sum _{i=1}^{n}\left(E{X}_{i}^{2}\right)\right)}^{p/2}\right].$
(2.3)
Lemma 2.3 Let X be a random variable and $\left\{{a}_{ni}:1\le i\le n,n\ge 1\right\}$ be an array of constants satisfying (2.1), ${b}_{n}={n}^{1/\alpha }{\left(logn\right)}^{1/\gamma }$. Then
$\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\le CE|X{|}^{\beta }.$
(2.4)

The proof is similar to that of Lemma 2.3 of Sung, so we omit it.

Lemma 2.4 Let $\left\{{X}_{n};n\ge 1\right\}$ be a sequence of random variables which is stochastically dominated by a random variable X. For any $u>0$, $t>0$ and $n\ge 1$, the following two statements hold:
$E|{X}_{n}{|}^{u}I\left(|{X}_{n}|\le t\right)\le C\left[E|X{|}^{u}I\left(|X|\le t\right)+{t}^{u}P\left(|X|>t\right)\right];$
$E|{X}_{n}{|}^{u}I\left(|{X}_{n}|>t\right)\le CE|X{|}^{u}I\left(|X|>t\right).$

Proof of Theorem 2.1 Without loss of generality, assume that ${a}_{ni}\ge 0$ for all $1\le i\le n$, $n\ge 1$. Define the truncated variables
${X}_{i}^{\left(n\right)}=-{b}_{n}I\left({a}_{ni}{X}_{i}<-{b}_{n}\right)+{a}_{ni}{X}_{i}I\left(|{a}_{ni}{X}_{i}|\le {b}_{n}\right)+{b}_{n}I\left({a}_{ni}{X}_{i}>{b}_{n}\right),\phantom{\rule{1em}{0ex}}{T}_{n}^{\left(n\right)}=\sum _{i=1}^{n}\left({X}_{i}^{\left(n\right)}-E{X}_{i}^{\left(n\right)}\right).$
By Lemma 2.1, we can see that for fixed $n\ge 1$, $\left\{{X}_{i}^{\left(n\right)};i\ge 1\right\}$ is still a sequence of ND random variables. It is easy to check that for any $\epsilon >0$,
$\left(|\sum _{i=1}^{n}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)\subset \left(\underset{1\le i\le n}{max}|{a}_{ni}{X}_{i}|>{b}_{n}\right)\cup \left(|\sum _{i=1}^{n}{X}_{i}^{\left(n\right)}|>\epsilon {b}_{n}\right),$
which implies that
$\begin{array}{rcl}P\left(|{T}_{n}|>\epsilon {b}_{n}\right)& \le & P\left(\underset{1\le i\le n}{max}|{a}_{ni}{X}_{i}|>{b}_{n}\right)+P\left(|{T}_{n}^{\left(n\right)}+\sum _{i=1}^{n}E{X}_{i}^{\left(n\right)}|>\epsilon {b}_{n}\right)\\ \le & \sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)+P\left(|{T}_{n}^{\left(n\right)}|>\epsilon {b}_{n}-|\sum _{i=1}^{n}E{X}_{i}^{\left(n\right)}|\right).\end{array}$
(2.7)
Firstly, we will show that
$\underset{n\to \mathrm{\infty }}{lim}{b}_{n}^{-1}|\sum _{i=1}^{n}E{X}_{i}^{\left(n\right)}|=0.$
(2.8)
If $\gamma >\alpha$, by ${\sum }_{i=1}^{n}|{a}_{ni}{|}^{\beta }={\sum }_{i=1}^{n}|{a}_{ni}{|}^{\gamma }\le Cn$ and Lyapunov’s inequality,
$\frac{1}{n}\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }\le {\left(\frac{1}{n}\sum _{i=1}^{n}|{a}_{ni}{|}^{\gamma }\right)}^{\alpha /\gamma }\le C,$

which implies that ${max}_{1\le i\le n}|{a}_{ni}{|}^{\alpha }\le {\sum }_{i=1}^{n}|{a}_{ni}{|}^{\alpha }\le Cn$ for all $n\ge 1$.

If $\gamma <\alpha$, then $\beta =\alpha$, and (2.1) directly yields ${max}_{1\le i\le n}|{a}_{ni}{|}^{\alpha }\le Cn$ for all $n\ge 1$.

For $0<\alpha \le 1$, (2.8) follows from Lemma 2.4 and $E|X{|}^{\beta }<\mathrm{\infty }$; for $1<\alpha \le 2$, it follows from Lemma 2.4, $E{X}_{n}=0$ and $E|X{|}^{\beta }<\mathrm{\infty }$. Hence, by (2.7) and (2.8), for n large enough,
$P\left(|{T}_{n}|>\epsilon {b}_{n}\right)\le \sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)+P\left(|{T}_{n}^{\left(n\right)}|>\frac{\epsilon {b}_{n}}{2}\right).$
(2.11)
It follows from Lemma 2.4 and Lemma 2.3 that
$\begin{array}{rcl}I& \triangleq & \sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\\ \le & CE|X{|}^{\beta }<\mathrm{\infty },\end{array}$
(2.14)

where $\beta =max\left(\alpha ,\gamma \right)$ for some $0<\alpha \le 2$ and $\gamma >0$.

It follows from Lemma 2.2 and the Markov inequality that
$\begin{array}{rcl}\mathit{II}& \triangleq & \sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(|{T}_{n}^{\left(n\right)}|>\frac{\epsilon {b}_{n}}{2}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}E\left(|{T}_{n}^{\left(n\right)}{|}^{p}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\left[\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{p}+{\left(\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{p/2}\right]\\ \triangleq & {\mathit{II}}_{1}+{\mathit{II}}_{2}.\end{array}$
(2.15)
Take $p\ge max\left\{2,\gamma \right\}$. If $\alpha <\gamma$, note that $E|X{|}^{\beta }=E|X{|}^{\gamma }<\mathrm{\infty }$. It follows from Lemma 2.4, Lemma 2.3 and the Markov inequality that
$\begin{array}{rcl}{\mathit{II}}_{1}& =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{p}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\left\{\sum _{i=1}^{n}|{a}_{ni}{|}^{p}E|{X}_{i}{|}^{p}I\left(|{a}_{ni}{X}_{i}|\le {b}_{n}\right)+\sum _{i=1}^{n}{b}_{n}^{p}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)\right\}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\left\{\sum _{i=1}^{n}|{a}_{ni}{|}^{p}E|X{|}^{p}I\left(|{a}_{ni}X|\le {b}_{n}\right)+\sum _{i=1}^{n}{b}_{n}^{p}P\left(|{a}_{ni}X|>{b}_{n}\right)\right\}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-\gamma }\sum _{i=1}^{n}|{a}_{ni}{|}^{\gamma }E|X{|}^{\gamma }+C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-\gamma /\alpha }{\left(logn\right)}^{-1}+CE|X{|}^{\gamma }<\mathrm{\infty }.\end{array}$
(2.16)
If $\alpha >\gamma$, note that $E|X{|}^{\beta }=E|X{|}^{\alpha }<\mathrm{\infty }$. We can get that
$\begin{array}{rcl}{\mathit{II}}_{1}& =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{p}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\left\{\sum _{i=1}^{n}|{a}_{ni}{|}^{p}E|{X}_{i}{|}^{p}I\left(|{a}_{ni}{X}_{i}|\le {b}_{n}\right)+\sum _{i=1}^{n}{b}_{n}^{p}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)\right\}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-\alpha }\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }E|X{|}^{\alpha }+C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(logn\right)}^{-\alpha /\gamma }+CE|X{|}^{\alpha }<\mathrm{\infty }.\end{array}$
(2.17)

Next, we will prove ${\mathit{II}}_{2}=C{\sum }_{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left({\sum }_{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{p/2}<\mathrm{\infty }$ in the following two cases.

If $\alpha <\gamma \le 2$ or $\gamma <\alpha \le 2$, let $p\ge max\left\{2,\frac{2\gamma }{\alpha }\right\}$. Noting that $E|X{|}^{\alpha }<\mathrm{\infty }$, we can get that
$\begin{array}{rcl}{\mathit{II}}_{2}& =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}E|{a}_{ni}{X}_{i}{|}^{2}I\left(|{a}_{ni}{X}_{i}|\le {b}_{n}\right)\right)}^{p/2}\\ +C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}|{b}_{n}{|}^{2}P\left(|{a}_{ni}{X}_{i}|>{b}_{n}\right)\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}E|{a}_{ni}X{|}^{2}I\left(|{a}_{ni}X|\le {b}_{n}\right)+{b}_{n}^{2}P\left(|{a}_{ni}X|>{b}_{n}\right)\right)}^{p/2}\\ +C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(\sum _{i=1}^{n}E\frac{|{a}_{ni}X{|}^{2}}{{b}_{n}^{2}}I\left(|{a}_{ni}X|\le {b}_{n}\right)\right)}^{p/2}+C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(\sum _{i=1}^{n}P\left(|{a}_{ni}X|>{b}_{n}\right)\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(\sum _{i=1}^{n}E\frac{|{a}_{ni}X{|}^{\alpha }}{{b}_{n}^{\alpha }}I\left(|{a}_{ni}X|\le {b}_{n}\right)\right)}^{p/2}+C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(\sum _{i=1}^{n}E\frac{|{a}_{ni}X{|}^{\alpha }}{{b}_{n}^{\alpha }}\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-\alpha p/2}{\left(\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }E|X{|}^{\alpha }\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(logn\right)}^{-\alpha p/\left(2\gamma \right)}<\mathrm{\infty }.\end{array}$
(2.18)
If $\gamma >2\ge \alpha$ or $\gamma \ge 2>\alpha$, then by ${\sum }_{i=1}^{n}|{a}_{ni}{|}^{\beta }={\sum }_{i=1}^{n}|{a}_{ni}{|}^{\gamma }\le Cn$ and Lyapunov’s inequality, we can obtain that
$\frac{1}{n}\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }\le {\left(\frac{1}{n}\sum _{i=1}^{n}|{a}_{ni}{|}^{\gamma }\right)}^{\alpha /\gamma }\le C,$
which implies that ${\sum }_{i=1}^{n}|{a}_{ni}{|}^{\alpha }=O\left(n\right)$. Hence, from ${max}_{1\le i\le n}|{a}_{ni}{|}^{\alpha }\le Cn$, we get that for all $k\ge \alpha$,
$\sum _{i=1}^{n}|{a}_{ni}{|}^{k}=\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }|{a}_{ni}{|}^{k-\alpha }\le Cn{n}^{\frac{k-\alpha }{\alpha }}\le C{n}^{\frac{k}{\alpha }}.$
So,
$\sum _{i=1}^{n}|{a}_{ni}{|}^{2}=O\left({n}^{2/\alpha }\right).$
Let $p>\gamma$,
$\begin{array}{rcl}{\mathit{II}}_{2}& =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}\left[{\left(\sum _{i=1}^{n}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{p/2}\right]\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}|{a}_{ni}{|}^{2}\right)}^{p/2}+C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{\left(\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }\right)}^{p/2}\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{b}_{n}^{-p}{n}^{p/\alpha }\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}{\left(logn\right)}^{-p/\gamma }<\mathrm{\infty }.\end{array}$
(2.19)

Hence, the desired result (2.2) follows from (2.7), (2.8), (2.11) and (2.14)-(2.19) immediately. The proof of Theorem 2.1 is complete. □

Similar to the proof of Theorem 2.1, we can get the following results for arrays of rowwise ND random variables.

Theorem 2.2 Let $\left\{{X}_{ni};i\ge 1,n\ge 1\right\}$ be an array of rowwise ND random variables which is stochastically dominated by a random variable X. Let ${T}_{n}={\sum }_{i=1}^{n}{a}_{ni}{X}_{ni}$, $n\ge 1$, where the weights $\left\{{a}_{ni}:1\le i\le n,n\ge 1\right\}$ are a triangular array of real constants satisfying (2.1). Let ${b}_{n}={n}^{1/\alpha }{\left(logn\right)}^{1/\gamma }$. If $E{X}_{ni}=0$ for $1<\alpha \le 2$ and $E|X{|}^{\beta }<\mathrm{\infty }$, then (2.2) holds true.

Remark 2.3 Note that, for $\alpha \ne \gamma$, the results of Cai and Wang et al. provide a conclusion stronger than Theorem 2.1 and Theorem 2.2 above, namely complete convergence for maximums of partial sums, albeit under an exponential moment condition; that is, they obtained results of the form
$\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{for every}\phantom{\rule{0.25em}{0ex}}\epsilon >0.$
It remains an open problem to obtain results of this maximal type for ND random variables under our moment conditions, both for $\alpha \ne \gamma$ and for $\alpha =\gamma$. We suspect that a solution could be obtained if a moment inequality sharper than Lemma 2.2 were established.

## Declarations

### Acknowledgements

The authors are very grateful to the anonymous referees and the editor Prof Andrei Volodin for their valuable comments and some helpful suggestions that improved the clarity and readability of the article. This work was supported by the Project Supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning (47), the National Natural Science Foundation of China (No:71271042, 11061012), the Plan of Jiangsu Specially-Appointed Professors and the Major Program of Key Research Center in Financial Risk Management of Jiangsu Universities of philosophy and social sciences (No:2012JDXM009).

## Authors’ Affiliations

(1)
School of Mathematics Science, University of Electronic Science and Technology of China, Chengdu, 610054, P.R. China
(2)
College of Science, Guilin University of Technology, Guilin, 541004, P.R. China
(3)
Institute of Financial Engineering, School of Finance, School of Applied Mathematics, Nanjing Audit University, Nanjing, 211815, P.R. China
