# A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables

## Abstract

In this article, applying the moment inequality for negatively dependent (ND) random variables obtained by Asadian et al., we study the complete convergence of weighted sums of arrays of rowwise ND random variables. As a result, the complete convergence theorem for arrays of ND random variables is extended. Our results generalize and improve the complete convergence theorems previously obtained by Hu et al., Ahmed et al., Volodin, and Sung from the independent and identically distributed case to ND sequences.

Mathematics Subject Classification: 62F12.

## 1 Introduction

Random variables X and Y are said to be negatively dependent (ND) if

$P\left(X\le x,Y\le y\right)\le P\left(X\le x\right)P\left(Y\le y\right)$
(1.1)

for all x, y ∈ ℝ. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).

It is important to note that (1.1) implies

$P\left(X>x,Y>y\right)\le P\left(X>x\right)P\left(Y>y\right)$
(1.2)

for all x, y ∈ ℝ. Moreover, (1.2) implies (1.1), and hence (1.1) and (1.2) are equivalent for a pair of random variables. However, (1.1) and (1.2) are not equivalent for a collection of three or more random variables. Consequently, the following definition is needed to define sequences of ND random variables.

Definition 1. Random variables X1, ..., X n are said to be ND if for all real x1, ..., x n ,

$\begin{array}{ll}\hfill P\left(\bigcap _{j=1}^{n}\left({X}_{j}\le {x}_{j}\right)\right)& \le \prod _{j=1}^{n}P\left({X}_{j}\le {x}_{j}\right),\phantom{\rule{2em}{0ex}}\\ \hfill P\left(\bigcap _{j=1}^{n}\left({X}_{j}>{x}_{j}\right)\right)& \le \prod _{j=1}^{n}P\left({X}_{j}>{x}_{j}\right).\phantom{\rule{2em}{0ex}}\end{array}$

An infinite sequence of random variables {X n ; n ≥ 1} is said to be ND if every finite subset X1, ..., X n is ND.
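As a concrete illustration (not part of the original argument), consider the cell counts of a single multinomial trial, which are known to be NA and hence ND; the two product inequalities of Definition 1 can then be verified by direct enumeration. The helper `is_nd` below is a hypothetical checker for finite discrete laws, a minimal sketch:

```python
from itertools import product

def is_nd(outcomes, grid):
    """Check Definition 1 for a finite joint law.

    outcomes: dict mapping value tuples (x_1, ..., x_n) to probabilities.
    grid: threshold values to test in each coordinate.
    """
    dim = len(next(iter(outcomes)))

    def prob(event):
        return sum(p for x, p in outcomes.items() if event(x))

    for xs in product(grid, repeat=dim):
        # lower-tail product inequality of Definition 1
        joint_le = prob(lambda x: all(x[j] <= xs[j] for j in range(dim)))
        prod_le = 1.0
        for j in range(dim):
            prod_le *= prob(lambda x, j=j: x[j] <= xs[j])
        # upper-tail product inequality of Definition 1
        joint_gt = prob(lambda x: all(x[j] > xs[j] for j in range(dim)))
        prod_gt = 1.0
        for j in range(dim):
            prod_gt *= prob(lambda x, j=j: x[j] > xs[j])
        if joint_le > prod_le + 1e-12 or joint_gt > prod_gt + 1e-12:
            return False
    return True

# Counts of one multinomial trial with three equally likely cells:
# exactly one coordinate equals 1.  Multinomial counts are NA, hence ND.
multinomial_counts = {(1, 0, 0): 1/3, (0, 1, 0): 1/3, (0, 0, 1): 1/3}
assert is_nd(multinomial_counts, (-0.5, 0.5, 1.5))
```

Since the coordinates only take the values 0 and 1, the three thresholds -0.5, 0.5, 1.5 exhaust all distinct events, so the enumeration is exhaustive.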

Definition 2. Random variables X1, X2, ..., X n , n ≥ 2 are said to be negatively associated (NA) if for every pair of disjoint subsets A1 and A2 of {1, 2, ..., n},

$\mathsf{\text{cov}}\phantom{\rule{1em}{0ex}}\left({f}_{1}\left({X}_{i};i\in {A}_{1}\right),\phantom{\rule{2.77695pt}{0ex}}{f}_{2}\left({X}_{j};j\in {A}_{2}\right)\right)\le 0,$

where f1 and f2 are increasing for every variable (or decreasing for every variable), such that this covariance exists. An infinite sequence of random variables {X n ; n ≥ 1} is said to be NA if every finite subfamily is NA.

The definition of PND was given by Lehmann [1]; the concepts of ND and NA were introduced by Joag-Dev and Proschan [2]. These notions of dependence are very useful in reliability theory and its applications.

Obviously, NA implies ND by the definitions of NA and ND, but ND does not imply NA, so ND is strictly weaker than NA. Because of the wide applicability of ND random variables, this notion has received more and more attention recently, and a series of useful results have been established [3–12]. Hence, extending the limit properties of independent random variables to the ND case is highly desirable and of considerable significance in both theory and applications.

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [13] as follows. A sequence {X n ; n ≥ 1} of random variables converges completely to the constant a if ${\sum }_{n=1}^{\infty }P\left(\left|{X}_{n}-a\right|>\epsilon \right)<\infty$ for all ϵ > 0. In view of the Borel-Cantelli lemma, this implies that $X_n \to a$ almost surely. The converse is true if {X n ; n ≥ 1} are independent random variables. Thus, complete convergence is one of the most important problems in probability theory. Hsu and Robbins [13] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [14] proved that if {X, X n ; n ≥ 1} is a sequence of i.i.d. random variables with mean zero, then $E|X|^{p(t+2)}<\infty$ (1 ≤ p < 2, t ≥ −1) is equivalent to the condition that ${\sum }_{n=1}^{\infty }{n}^{t}P\left(\left|{\sum }_{i=1}^{n}{X}_{i}\right|/{n}^{1/p}>\epsilon \right)<\infty$ for all ϵ > 0. Some recent results can be found in [12, 15–17].
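To make the definition concrete: for i.i.d. N(0,1) summands, $S_n/n \sim N(0, 1/n)$, so each term of the Hsu-Robbins series can be computed exactly and the partial sums are seen to stabilize. The sketch below is purely illustrative and not part of the paper's argument; the function name `tail_prob` is ours:

```python
import math

def tail_prob(n, eps):
    # P(|S_n|/n > eps) when S_n is a sum of n i.i.d. N(0,1) variables:
    # S_n/n ~ N(0, 1/n), so the tail is 2(1 - Phi(eps*sqrt(n))) = erfc(eps*sqrt(n)/sqrt(2)).
    return math.erfc(eps * math.sqrt(n) / math.sqrt(2))

eps = 0.5
partial_sums = {N: sum(tail_prob(n, eps) for n in range(1, N + 1))
                for N in (10, 100, 1000)}
# The terms decay like exp(-eps^2 * n / 2), so the series converges rapidly,
# consistent with complete convergence of S_n/n to 0.
```

Numerically, the partial sums at N = 100 and N = 1000 agree to several decimal places, reflecting the exponential decay of the summands.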

In this article, we study complete convergence for ND random variables. Our results generalize and improve the complete convergence theorems previously obtained by Hu et al. [16], Ahmed et al. [15], Volodin [17], and Sung [18] from the i.i.d. case to ND sequences.

## 2 Main results

Theorem 1. Let {X nk ; k, n ≥ 1} be an array of rowwise ND random variables for which there exist a random variable X and a positive constant c satisfying

$P\left(\left|{X}_{nk}\right|\ge x\right)\le cP\left(\left|X\right|\ge x\right)\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}n,\phantom{\rule{1em}{0ex}}k\ge 1,\phantom{\rule{1em}{0ex}}x>0.$
(2.1)

Suppose that β > -1, and that {a nk ; k, n ≥ 1} is an array of constants such that

$\underset{k\ge 1}{\text{sup}}\left|{a}_{nk}\right|=O\left({n}^{-\gamma }\right)\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{some}}\phantom{\rule{2.77695pt}{0ex}}\gamma >0,$
(2.2)

and

${\sum }_{k=1}^{\infty }{\left|{a}_{nk}\right|}^{\theta }=O\left({n}^{\alpha }\right),\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{some}}\phantom{\rule{2.77695pt}{0ex}}\alpha <2\gamma \phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{and}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{some}}\phantom{\rule{2.77695pt}{0ex}}0<\theta <\text{min}\left(2,2-\alpha /\gamma \right).$
(2.3)
(i)

If 1 + α + β > 0 and

$E{\left|X\right|}^{v}<\infty ,\phantom{\rule{1em}{0ex}}v=\theta +\frac{1+\alpha +\beta }{\gamma }.$
(2.4)

When v ≥ 1, assume further that EX nk = 0 for all n, k ≥ 1. Then

$\sum _{n=1}^{\infty }{n}^{\beta }P\left(\left|\sum _{k=1}^{\infty }{a}_{nk}{X}_{nk}\right|>\epsilon \right)<\infty ,\phantom{\rule{1em}{0ex}}\forall \epsilon >0.$
(2.5)
(ii)

If 1 + α + β = 0 and

$E\left({\left|X\right|}^{\theta }\text{ln}\left(1+\left|X\right|\right)\right)<\infty .$
(2.6)

When v = θ ≥ 1, further assume that EX nk = 0 for any n, k ≥ 1. Then (2.5) holds.

Remark 2. Theorem 1 generalizes and improves the complete convergence theorems previously obtained by Hu et al. [16], Ahmed et al. [15], Volodin [17], and Sung [18] from the i.i.d. case to ND arrays.

By using Theorem 1, we can extend the well-known Baum and Katz [14] complete convergence theorem from the i.i.d. case to ND random variables.

Corollary 3. Let {X n ; n ∈ ℕ} be a sequence of ND random variables for which there exist a random variable X and a positive constant c satisfying P(|X n | ≥ x) ≤ cP(|X| ≥ x) for all n ≥ 1, x > 0. Suppose γ > 1/2 and γp > 1; if p ≥ 1, assume also that EX n = 0 for all n ≥ 1. If $E|X|^{p}<\infty$, then

$\sum _{n=1}^{\infty }{n}^{\gamma p-2}P\left(\left|{S}_{n}\right|>\epsilon {n}^{\gamma }\right)<\infty ,\phantom{\rule{1em}{0ex}}\forall \epsilon >0,$

where ${S}_{n}={\sum }_{k=1}^{n}{X}_{k}$.

## 3 Proofs

In the following, let $a_n \ll b_n$ denote that there exists a constant c > 0 such that $a_n \le c\,b_n$ for sufficiently large n. The symbol c stands for a generic positive constant which may differ from one place to another.

Lemma 1. [3] Let X 1, ..., X n be ND random variables and let {f n ; n ≥ 1} be a sequence of Borel functions, all of which are monotone increasing (or all monotone decreasing). Then {f n (X n ); n ≥ 1} is still a sequence of ND random variables.

Lemma 2. [9] Let {X n ; n ≥ 1} be an ND sequence with EX n = 0 and E|X n |p< ∞, p ≥ 2. Then

$E{\left|{S}_{n}\right|}^{p}\le {c}_{p}\left\{\sum _{i=1}^{n}E{\left|{X}_{i}\right|}^{p}+{\left(\sum _{i=1}^{n}E{X}_{i}^{2}\right)}^{p/2}\right\},$

where c p > 0 depends only on p.
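To see what Lemma 2 asserts in the simplest case, one can check it by brute force for independent Rademacher signs (which are trivially ND). The lemma does not specify $c_p$; the value $c_4 = 3$ used below is our assumption, which happens to suffice in this example:

```python
from itertools import product

def fourth_moment(n):
    # Exact E|S_n|^4 for S_n a sum of n i.i.d. Rademacher (+/-1) variables,
    # computed by enumerating all 2^n equally likely sign patterns.
    return sum(sum(signs) ** 4 for signs in product((-1, 1), repeat=n)) / 2 ** n

for n in range(1, 11):
    lhs = fourth_moment(n)
    # Rosenthal-type bound of Lemma 2 with p = 4: here E|X_i|^4 = E X_i^2 = 1,
    # so the right-hand side is c_4 * (n + n^2); we try the assumed constant c_4 = 3.
    assert lhs <= 3 * (n + n ** 2)
    # the exact value matches the classical closed form 3n^2 - 2n
    assert abs(lhs - (3 * n * n - 2 * n)) < 1e-9
```

The closed form follows from expanding $E S_n^4 = \sum_i E X_i^4 + 3\sum_{i\ne j} E X_i^2 E X_j^2 = n + 3n(n-1)$, so the bound holds here with room to spare.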

Lemma 3. [19] Let {X n ; n ≥ 1} be an arbitrary sequence of random variables for which there exist a random variable X and a positive constant c such that P(|X n | ≥ x) ≤ cP(|X| ≥ x) for all n ≥ 1 and x > 0. Then for any u > 0, t > 0, and n ≥ 1,

$E{\left|{X}_{n}\right|}^{u}{I}_{\left(\left|{X}_{n}\right|\le t\right)}\le c\left(E{\left|X\right|}^{u}{I}_{\left(\left|X\right|\le t\right)}+{t}^{u}P\left(\left|X\right|>t\right)\right),$

and

$E{\left|{X}_{n}\right|}^{u}{I}_{\left(\left|{X}_{n}\right|>t\right)}\le cE{\left|X\right|}^{u}{I}_{\left(\left|X\right|>t\right)}.$

Proof of Theorem 1. Let ${a}_{nk}^{+}=\text{max}\left({a}_{nk},0\right)\ge 0$ and ${a}_{nk}^{-}=\text{max}\left(-{a}_{nk},0\right)\ge 0$. From (2.2), (2.5), and

$\sum _{n=1}^{\infty }{n}^{\beta }P\left(\left|\sum _{k=1}^{\infty }{a}_{nk}{X}_{nk}\right|>\epsilon \right)\le \sum _{n=1}^{\infty }{n}^{\beta }P\left(\left|\sum _{k=1}^{\infty }{a}_{nk}^{+}{X}_{nk}\right|>\epsilon /2\right)+\sum _{n=1}^{\infty }{n}^{\beta }P\left(\left|\sum _{k=1}^{\infty }{a}_{nk}^{-}{X}_{nk}\right|>\epsilon /2\right),$

without loss of generality, we can assume that $a_{nk}>0$ for all k, n ≥ 1 and

$\underset{k\ge 1}{\text{sup}}{a}_{nk}={n}^{-\gamma }.$
(3.1)

For any k, n ≥ 1, let

${Y}_{nk}=-{a}_{nk}^{-1}{I}_{\left({a}_{nk}{X}_{nk}<-1\right)}+{X}_{nk}{I}_{\left({a}_{nk}\left|{X}_{nk}\right|\le 1\right)}+{a}_{nk}^{-1}{I}_{\left({a}_{nk}{X}_{nk}>1\right)}.$

Then for any n ≥ 1,

$\left\{\left|\sum _{k=1}^{\infty }{a}_{nk}{X}_{nk}\right|>\epsilon \right\}\subseteq \left\{\forall k\ge 1,\left|{a}_{nk}{X}_{nk}\right|\le 1,\left|\sum _{k=1}^{\infty }{a}_{nk}{Y}_{nk}\right|>\epsilon \right\}\cup \left\{\exists k\ge 1,\left|{a}_{nk}{X}_{nk}\right|>1\right\}.$

Hence

$\sum_{n=1}^{\infty}n^{\beta}P\left(\left|\sum_{k=1}^{\infty}a_{nk}X_{nk}\right|>\epsilon\right)\le \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}P\left(\left|a_{nk}X_{nk}\right|>1\right)+\sum_{n=1}^{\infty}n^{\beta}P\left(\left|\sum_{k=1}^{\infty}a_{nk}Y_{nk}\right|>\epsilon\right)\stackrel{\wedge}{=}J_{1}+J_{2}.$
(3.2)

Therefore, in order to prove (2.5), it suffices to prove that J1 < ∞ and J2 < ∞.

(i)

If 1 + α + β > 0, by Lemma 3, (2.1), (2.3), (2.4), (3.1), and the Markov inequality, we have

$\begin{array}{ll}\hfill {J}_{1}& \ll \sum _{n=1}^{\infty }{n}^{\beta }\sum _{k=1}^{\infty }P\left(\left|{a}_{nk}X\right|>1\right)\phantom{\rule{2em}{0ex}}\\ \le \sum _{n=1}^{\infty }{n}^{\beta }\sum _{k=1}^{\infty }E{\left|{a}_{nk}X\right|}^{\theta }{I}_{\left(\left|X\right|>{a}_{nk}^{-1}\right)}\phantom{\rule{2em}{0ex}}\\ \le \sum _{n=1}^{\infty }{n}^{\beta }E{\left|X\right|}^{\theta }{I}_{\left(\left|X\right|>{n}^{\gamma }\right)}\sum _{k=1}^{\infty }{a}_{nk}^{\theta }\phantom{\rule{2em}{0ex}}\\ \ll \sum _{n=1}^{\infty }{n}^{\beta +\alpha }E{\left|X\right|}^{\theta }{I}_{\left(\left|X\right|>{n}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ =\sum _{n=1}^{\infty }{n}^{\beta +\alpha }\sum _{j=n}^{\infty }E{\left|X\right|}^{\theta }{I}_{\left({j}^{\gamma }<\left|X\right|\le {\left(j+1\right)}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ =\sum _{j=1}^{\infty }E{\left|X\right|}^{\theta }{I}_{\left({j}^{\gamma }<\left|X\right|\le {\left(j+1\right)}^{\gamma }\right)}\sum _{n=1}^{j}{n}^{\beta +\alpha }\phantom{\rule{2em}{0ex}}\\ \le \sum _{j=1}^{\infty }{j}^{1+\alpha +\beta }E{\left|X\right|}^{\theta }{I}_{\left({j}^{\gamma }<\left|X\right|\le {\left(j+1\right)}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{j=1}^{\infty }E{\left|X\right|}^{\theta +\left(1+\alpha +\beta \right)/\gamma }{I}_{\left({j}^{\gamma }<\left|X\right|\le {\left(j+1\right)}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ <\infty .\phantom{\rule{2em}{0ex}}\end{array}$
(3.3)

Next we prove that J2 < ∞ for v < 1 and v ≥ 1, respectively. Put $N(nk)=\#\left\{i:{(nk)}^{\gamma}\le {a}_{ni}^{-1}<{(n(k+1))}^{\gamma}\right\}$ for $k, n\ge 1$.

(a)

If v < 1, choose t such that v < t < 1. By the Markov inequality, the $c_r$ inequality, Lemma 3, and the proof of (3.3), we have

$\begin{aligned}
J_{2} &\ll \sum_{n=1}^{\infty}n^{\beta}E\left|\sum_{k=1}^{\infty}a_{nk}Y_{nk}\right|^{t}\le \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}E\left|a_{nk}Y_{nk}\right|^{t}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}\left\{E\left|a_{nk}X_{nk}\right|^{t}I_{(a_{nk}|X_{nk}|\le 1)}+P\left(|a_{nk}X_{nk}|>1\right)\right\}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}\left\{E\left|a_{nk}X\right|^{t}I_{(a_{nk}|X|\le 1)}+P\left(|a_{nk}X|>1\right)\right\}\\
&=\sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}\sum_{(nj)^{\gamma}\le a_{nk}^{-1}<(n(j+1))^{\gamma}}\left\{a_{nk}^{t}E|X|^{t}I_{(|X|\le a_{nk}^{-1})}+P\left(|a_{nk}X|>1\right)\right\}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-\gamma t}E|X|^{t}I_{(|X|<(n(j+1))^{\gamma})}\\
&=\sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-\gamma t}\sum_{i=1}^{n(j+1)}E|X|^{t}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\le \sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-\gamma t}\sum_{i=1}^{2n}E|X|^{t}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\quad+\sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-\gamma t}\sum_{i=2n+1}^{n(j+1)}E|X|^{t}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\stackrel{\wedge}{=}J_{21}+J_{22},
\end{aligned}$

where the contribution of the $P(|a_{nk}X|>1)$ terms is finite exactly as in the proof of (3.3) and has been absorbed.
(3.4)

Since t > v > θ and γ > 0, we have ${(1+1/k)}^{-\gamma\theta}\ge {2}^{-\gamma\theta}$ and ${(nk)}^{\gamma(t-\theta)}\ge {(nj)}^{\gamma(t-\theta)}$ for k ≥ j.

Therefore, by (2.3) we have

$\begin{array}{ll}{n}^{\alpha }\hfill & \gg \sum _{i=1}^{\infty }{a}_{ni}^{\theta }=\sum _{k=1}^{\infty }\sum _{{\left(nk\right)}^{\gamma }\le {a}_{ni}^{-1}<{\left(n\left(k+1\right)\right)}^{\gamma }}{a}_{ni}^{\theta }\hfill \\ \ge \sum _{k=1}^{\infty }N\left(nk\right){\left(n\left(k+1\right)\right)}^{-\gamma \theta }\hfill \\ \ge \sum _{k=1}^{\infty }N\left(nk\right){2}^{-\gamma \theta }{\left(nk\right)}^{-\gamma \theta }\hfill \\ \gg \sum _{k=j}^{\infty }N\left(nk\right){\left(nk\right)}^{-\gamma t}{\left(nk\right)}^{\gamma \left(t-\theta \right)}\hfill \\ \ge \sum _{k=j}^{\infty }N\left(nk\right){\left(nk\right)}^{-\gamma t}{\left(nj\right)}^{\gamma \left(t-\theta \right)}.\hfill \end{array}$

Hence,

$\sum _{k=j}^{\infty }N\left(nk\right){\left(nk\right)}^{-\gamma t}\ll {n}^{\alpha -\gamma \left(t-\theta \right)}{j}^{-\gamma \left(t-\theta \right)},\phantom{\rule{1em}{0ex}}\forall j\in N.$
(3.5)

Combining with (2.4) and $t>v=\theta +\frac{1+\alpha +\beta }{\gamma }$, i.e. α + β - γ(t - θ) < -1, we can get that

$\begin{array}{ll}{J}_{21}\hfill & \ll \sum _{n=1}^{\infty }{n}^{\beta }{n}^{\alpha -\gamma \left(t-\theta \right)}\sum _{i=1}^{2n}E{|X|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le |X|<{i}^{\gamma }\right)}\hfill \\ \ll \sum _{i=2}^{\infty }E{|X|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le |X|<{i}^{\gamma }\right)}\sum _{n=\left[i/2\right]}^{\infty }{n}^{\beta +\alpha -\gamma \left(t-\theta \right)}\hfill \\ \ll \sum _{i=2}^{\infty }{i}^{\beta +\alpha -\gamma \left(t-\theta \right)+1}E{|X|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le |X|<{i}^{\gamma }\right)}\hfill \\ \ll \sum _{i=2}^{\infty }E{|X|}^{\theta +\left(1+\alpha +\beta \right)/\gamma }{I}_{\left({\left(i-1\right)}^{\gamma }\le |X|<{i}^{\gamma }\right)}\hfill \\ <\infty .\hfill \end{array}$
(3.6)

By (3.5),

$\begin{array}{ll}\hfill {J}_{22}& =\sum _{n=1}^{\infty }{n}^{\beta }\sum _{i=2n+1}^{\infty }E{\left|X\right|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\sum _{j=\left[\frac{i}{n}-1\right]}^{\infty }N\left(nj\right){\left(nj\right)}^{-\gamma t}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{n=1}^{\infty }{n}^{\beta }\sum _{i=2n+1}^{\infty }{n}^{\alpha -\gamma \left(t-\theta \right)}{\left(\frac{i}{n}\right)}^{-\gamma \left(t-\theta \right)}E{\left|X\right|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }{i}^{-\gamma \left(t-\theta \right)}E{\left|X\right|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\sum _{n=1}^{\left[i/2\right]}{n}^{\alpha +\beta }\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }{i}^{1+\alpha +\beta -\gamma \left(t-\theta \right)}E{\left|X\right|}^{t}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }E{\left|X\right|}^{\theta +\left(1+\alpha +\beta \right)/\gamma }{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ <\infty .\phantom{\rule{2em}{0ex}}\end{array}$
(3.7)

By (3.2), (3.3), (3.4), (3.6), and (3.7), (2.5) holds.

(b)

If v ≥ 1. Since EX nk = 0, $E|X|^{v}<\infty$, v ≥ 1, v > θ, β + 1 > 0, and γ > 0, by (2.3), (2.4), (3.1), and Lemma 3, we have

$\begin{aligned}
\left|\sum_{k=1}^{\infty}Ea_{nk}Y_{nk}\right| &\le \left|\sum_{k=1}^{\infty}Ea_{nk}X_{nk}I_{(|a_{nk}X_{nk}|\le 1)}\right|+\sum_{k=1}^{\infty}P\left(|a_{nk}X_{nk}|>1\right)\\
&=\left|\sum_{k=1}^{\infty}Ea_{nk}X_{nk}I_{(|a_{nk}X_{nk}|>1)}\right|+\sum_{k=1}^{\infty}EI_{(|a_{nk}X_{nk}|>1)}\\
&\ll \sum_{k=1}^{\infty}E\left|a_{nk}X_{nk}\right|^{v}I_{(|a_{nk}X_{nk}|>1)}\\
&\ll \sup_{k\ge 1}a_{nk}^{\,v-\theta}\sum_{k=1}^{\infty}a_{nk}^{\theta}E|X|^{v}I_{(|X|>a_{nk}^{-1})}\\
&\ll n^{-\gamma(v-\theta)+\alpha}E|X|^{v}I_{(|X|>n^{\gamma})}=n^{-(1+\beta)}E|X|^{v}I_{(|X|>n^{\gamma})}\\
&\to 0,\quad n\to\infty.
\end{aligned}$
(3.8)
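The last equality in (3.8) is just the moment condition (2.4) rearranged; spelled out:

```latex
% (2.4) reads v = \theta + (1+\alpha+\beta)/\gamma, hence
\gamma(v-\theta) = 1 + \alpha + \beta
\quad\Longrightarrow\quad
-\gamma(v-\theta) + \alpha = -(1+\beta),
% and since \beta > -1, the factor n^{-(1+\beta)} tends to 0 as n \to \infty.
```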

Thus, in order to prove J2 < ∞, we only need to prove that for all ϵ > 0,

${J}_{2}^{*}=\sum _{n=1}^{\infty }{n}^{\beta }P\left(\left|\sum _{k=1}^{\infty }\left({a}_{nk}{Y}_{nk}-E{a}_{nk}{Y}_{nk}\right)\right|>\epsilon \right)<\infty .$
(3.9)

Obviously, Y nk is a monotone function of X nk . By Lemma 1, {a nk Y nk - Ea nk Y nk ; k, n ≥ 1} is also an array of rowwise ND random variables with E(a nk Y nk - Ea nk Y nk ) = 0. Note that γ(2 - θ) - α = γ(2 - α/γ - θ) > 0 since θ < 2 - α/γ, and 1 + β > 0 since β > -1. Taking $t>\text{max}\left(2,\frac{2\left(1+\beta \right)}{\gamma \left(2-\theta \right)-\alpha }\right)$ in Lemma 2, by the Markov inequality and the $c_r$ inequality, we have

$\begin{array}{ll}\hfill {J}_{2}^{*}& \ll \sum _{n=1}^{\infty }{n}^{\beta }E{\left|\sum _{k=1}^{\infty }\left({a}_{nk}{Y}_{nk}-E{a}_{nk}{Y}_{nk}\right)\right|}^{t}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{n=1}^{\infty }{n}^{\beta }\left\{\sum _{k=1}^{\infty }E{\left|{a}_{nk}{Y}_{nk}\right|}^{t}+{\left(\sum _{k=1}^{\infty }E{\left|{a}_{nk}{Y}_{nk}\right|}^{2}\right)}^{t/2}\right\}\phantom{\rule{2em}{0ex}}\\ \stackrel{\wedge }{=}{J}_{21}^{*}+{J}_{22}^{*}.\phantom{\rule{2em}{0ex}}\end{array}$
(3.10)

From the process of the proof of (3.4)-(3.7), we know that

${J}_{21}^{*}<\infty .$
(3.11)

Since θ < min(2, v) and $E|X|^{v}<\infty$, by Lemma 3, (2.1), (2.3), and (3.1), we have

$\begin{aligned}
\sum_{k=1}^{\infty}E\left|a_{nk}Y_{nk}\right|^{2} &\ll \sum_{k=1}^{\infty}E\left|a_{nk}X\right|^{2}I_{(|a_{nk}X|\le 1)}+\sum_{k=1}^{\infty}P\left(|a_{nk}X|>1\right)\\
&\ll \begin{cases}
\sum_{k=1}^{\infty}E|a_{nk}X|^{2}\ll \sum_{k=1}^{\infty}|a_{nk}|^{\theta}\sup_{k\ge 1}|a_{nk}|^{2-\theta}\ll n^{\alpha-\gamma(2-\theta)}, & v\ge 2,\\[4pt]
\sum_{k=1}^{\infty}E|a_{nk}X|^{v}\ll \sum_{k=1}^{\infty}|a_{nk}|^{\theta}\sup_{k\ge 1}|a_{nk}|^{v-\theta}\ll n^{\alpha-\gamma(v-\theta)}, & v<2.
\end{cases}
\end{aligned}$

By the definition of t, t(γ(2 - θ) - α)/2 - β > 1 and t(1 + β)/2 - β > 1, hence

$J_{22}^{*}\ll \begin{cases}
\sum_{n=1}^{\infty}n^{-\left(t(\gamma(2-\theta)-\alpha)/2-\beta\right)}, & v\ge 2,\\[4pt]
\sum_{n=1}^{\infty}n^{-\left(t(1+\beta)/2-\beta\right)}, & v<2,
\end{cases}\quad <\infty.$
(3.12)

By (3.10)-(3.12), we have (3.9), therefore, (2.5) holds.

(ii)

If 1 + α + β = 0, then $\sum_{n=1}^{j}n^{\alpha+\beta}\ll \ln j$. Similarly to the proof of (3.3), we have

${J}_{1}=\sum _{n=1}^{\infty }{n}^{\beta }\sum _{k=1}^{\infty }P\left(\left|{a}_{nk}{X}_{nk}\right|>1\right)\ll E\left({\left|X\right|}^{\theta }\text{ln}\left(1+\left|X\right|\right)\right)<\infty$

from (2.6).

(a)

When v = θ < 1, similarly to the corresponding parts of the proofs of (3.6) and (3.7), we get

${J}_{21}\ll E\left({\left|X\right|}^{\theta }\right)<\infty ,$

and

${J}_{22}\ll E\left({\left|X\right|}^{\theta }\text{ln}\left(1+\left|X\right|\right)\right)<\infty ,$

from (2.6). Therefore, (2.5) holds.

(b)

When v = θ ≥ 1. Since EX nk = 0, $E|X|^{v}<\infty$, 1 + α + β = 0, θ < 2, β > -1, and v = θ, (3.8) remains true. Therefore, we only need to prove (3.9). By Lemmas 2 and 3 and the fact that J 1 < ∞, noting that v = θ and α + β = -1, we have

$\begin{aligned}
J_{2}^{*} &\ll \sum_{n=1}^{\infty}n^{\beta}E\left(\sum_{k=1}^{\infty}\left(a_{nk}Y_{nk}-Ea_{nk}Y_{nk}\right)\right)^{2}\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}E\left|a_{nk}Y_{nk}\right|^{2}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}P\left(|a_{nk}X_{nk}|>1\right)+\sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}E\left|a_{nk}X\right|^{2}I_{(|a_{nk}X|\le 1)}\\
&=J_{1}+\sum_{n=1}^{\infty}n^{\beta}\sum_{k=1}^{\infty}a_{nk}^{2}EX^{2}I_{(|X|\le a_{nk}^{-1})}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}\sum_{(nj)^{\gamma}\le a_{nk}^{-1}<(n(j+1))^{\gamma}}a_{nk}^{2}EX^{2}I_{(|X|\le a_{nk}^{-1})}\\
&\ll \sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-2\gamma}EX^{2}I_{(|X|<(n(j+1))^{\gamma})}\\
&=\sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-2\gamma}\sum_{i=1}^{n(j+1)}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\le \sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-2\gamma}\sum_{i=1}^{2n}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\quad+\sum_{n=1}^{\infty}n^{\beta}\sum_{j=1}^{\infty}N(nj)(nj)^{-2\gamma}\sum_{i=2n+1}^{n(j+1)}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\stackrel{\wedge}{=}J_{21}^{*}+J_{22}^{*}.
\end{aligned}$
(3.13)

Since v = θ < 2, (3.5) also holds for t = 2. Combining this with (2.6) and α + β = -1, we can get that

$\begin{aligned}
J_{21}^{*} &\ll \sum_{n=1}^{\infty}n^{\beta}n^{\alpha-\gamma(2-\theta)}\sum_{i=1}^{2n}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\ll \sum_{i=2}^{\infty}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\sum_{n=[i/2]}^{\infty}n^{-1-\gamma(2-\theta)}\\
&\ll \sum_{i=2}^{\infty}i^{-\gamma(2-\theta)}EX^{2}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&\ll \sum_{i=2}^{\infty}E|X|^{\theta}I_{((i-1)^{\gamma}\le |X|<i^{\gamma})}\\
&<\infty.
\end{aligned}$
(3.14)

By (3.5),

$\begin{array}{ll}\hfill {J}_{22}^{*}& =\sum _{n=1}^{\infty }{n}^{\beta }\sum _{i=2n+1}^{\infty }E{X}^{2}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\sum _{j=\left[\frac{i}{n}-1\right]}^{\infty }N\left(nj\right){\left(nj\right)}^{-2\gamma }\phantom{\rule{2em}{0ex}}\\ \ll \sum _{n=1}^{\infty }{n}^{\beta }\sum _{i=2n+1}^{\infty }{n}^{\alpha -\gamma \left(2-\theta \right)}{\left(\frac{i}{n}\right)}^{-\gamma \left(2-\theta \right)}E{\left|X\right|}^{2}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }{i}^{-\gamma \left(2-\theta \right)}E{X}^{2}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\sum _{n=1}^{\left[i/2\right]}{n}^{-1}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }{i}^{-\gamma \left(2-\theta \right)}\text{ln}iE{X}^{2}{I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ \ll \sum _{i=2}^{\infty }E{\left|X\right|}^{\theta }\text{ln}\left(1+\left|X\right|\right){I}_{\left({\left(i-1\right)}^{\gamma }\le \left|X\right|<{i}^{\gamma }\right)}\phantom{\rule{2em}{0ex}}\\ <\infty .\phantom{\rule{2em}{0ex}}\end{array}$
(3.15)

By (3.13)-(3.15), (3.9) holds.

Proof of Corollary 3. Let $a_{nk}=\begin{cases}n^{-\gamma}, & k\le n,\\ 0, & k>n,\end{cases}$ and take $0\le\alpha<1$, $\theta=\frac{1-\alpha}{\gamma}$, $\beta=\gamma p-2$. Then β > -1, 0 < θ < min(2, 2 - α/γ) (using γ > 1/2), 1 + α + β > 0, θ + (1 + α + β)/γ = p, and ${\sum }_{k=1}^{\infty }{\left|{a}_{nk}\right|}^{\theta }={n}^{\alpha }$. Thus the conditions of Theorem 1 hold, and by Theorem 1,

$\sum _{n=1}^{\infty }{n}^{\gamma p-2}P\left(\left|{S}_{n}\right|>\epsilon {n}^{\gamma }\right)<\infty ,\phantom{\rule{1em}{0ex}}\forall \epsilon >0.$
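The parameter bookkeeping in this proof can be checked mechanically. The sketch below (with illustrative values of γ, p, α chosen by us, not taken from the paper) verifies that the stated choices satisfy the hypotheses of Theorem 1:

```python
def corollary3_params(gamma, p, alpha):
    # Weight choice in the proof of Corollary 3: a_nk = n^{-gamma} for k <= n,
    # with theta = (1 - alpha)/gamma and beta = gamma*p - 2.
    theta = (1 - alpha) / gamma
    beta = gamma * p - 2
    assert beta > -1                                  # needs gamma*p > 1
    assert 0 < theta < min(2, 2 - alpha / gamma)      # needs gamma > 1/2, 0 <= alpha < 1
    assert 1 + alpha + beta > 0
    v = theta + (1 + alpha + beta) / gamma
    assert abs(v - p) < 1e-12   # the moment condition E|X|^v becomes E|X|^p
    # Also sum_k |a_nk|^theta = n * n^{-gamma*theta} = n^alpha, matching (2.3).
    return theta, beta, v

# illustrative values satisfying gamma > 1/2 and gamma*p > 1
corollary3_params(gamma=0.75, p=2.0, alpha=0.5)
```

Any γ > 1/2, p with γp > 1, and α ∈ [0, 1) pass these checks, which is exactly the range used in the proof.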

## References

1. Lehmann EL: Some concepts of dependence. Ann Math Stat 1966, 37: 1137–1153.

2. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann Stat 1983, 11(1):286–295. 10.1214/aos/1176346079

3. Bozorgnia A, Patterson RF, Taylor RL: Limit theorems for ND r.v.'s. University of Georgia, Atlanta; 1993.

4. Bozorgnia A, Patterson RF, Taylor RL: Weak laws of large numbers for negatively dependent random variables in Banach spaces. In Madan Puri Festschrift. Edited by: Brunner E, Denker M. VSP International Science Publishers, Vilnius; 1996:11–22.

5. Amini M: Some contributions to limit theorems for negatively dependent random variables. Ph.D. thesis; 2000.

6. Fakoor V, Azarnoosh HA: Probability inequalities for sums of negatively dependent random variables. Pak J Stat 2005, 21(3):257–264.

7. Sani Nili HR, Amini M, Bozorgnia A: Strong laws for weighted sums of negative dependent random variables. J Sci Islam Repub Iran 2005, 16(3):261–265.

8. Klesov O, Rosalsky A, Volodin A: On the almost sure growth rate of sums of lower negatively dependent nonnegative random variables. Stat Probab Lett 2005, 71: 193–202. 10.1016/j.spl.2004.10.027

9. Asadian N, Fakoor V, Bozorgnia A: Rosenthal's type inequalities for negatively orthant dependent random variables. J Iran Stat Soc 2006, 5(1–2):66–75.

10. Wu QY, Jiang YY: Strong consistency of M estimator in linear model for negatively dependent random samples. Commun Stat Theory Methods 2011, 40(3):467–491. 10.1080/03610920903427792

11. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J Inequal Appl 2010, 383805: 8.

12. Wu QY: Complete convergence for negatively dependent sequences of random variables. J Inequal Appl 2010, 507293: 10.

13. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc Nat Acad Sci USA 1947, 33: 25–31. 10.1073/pnas.33.2.25

14. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans Am Math Soc 1965, 120: 108–123. 10.1090/S0002-9947-1965-0198524-1

15. Ahmed SE, Antonini Giuliano R, Volodin A: On the rate of complete convergence for weighted sums of arrays of Banach space valued random elements with application to moving average processes. Stat Probab Lett 2002, 58: 185–194. 10.1016/S0167-7152(02)00126-8

16. Hu TC, Li D, Rosalsky A, Volodin AI: On the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Theory Probab Appl 2002, 47(3):455–468.

17. Volodin A, Antonini Giuliano R, Hu TC: A note on the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Lobachevskii J Math (Electronic) 2004, 15: 21–33.

18. Sung SH: Complete convergence for weighted sums of random variables. Stat Probab Lett 2007, 77: 303–311. 10.1016/j.spl.2006.07.010

19. Wu QY: Probability Limit Theory for Mixed Sequence. 2006.

## Acknowledgements

The author is very grateful to the referees and the editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the paper. Supported by the National Natural Science Foundation of China (11061012), and project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in the Guangxi Institutions of Higher Learning ([2011] 47).

## Author information


### Corresponding author

Correspondence to Qunying Wu.

### Competing interests

The author declares that she has no competing interests.


Wu, Q. A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J Inequal Appl 2012, 50 (2012). https://doi.org/10.1186/1029-242X-2012-50