
# Complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables

## Abstract

Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise $\stackrel{˜}{\rho }$-mixing random variables. Some sufficient conditions for complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables are presented without assumptions of identical distribution. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\stackrel{˜}{\rho }$-mixing random variables are obtained.

MSC: 60F15.

## 1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\left\{{U}_{n},n\ge 1\right\}$ is said to converge completely to a constant C if ${\sum }_{n=1}^{\mathrm{\infty }}P\left(|{U}_{n}-C|>\epsilon \right)<\mathrm{\infty }$ for all $\epsilon >0$. In view of the Borel-Cantelli lemma, this implies that ${U}_{n}\to C$ almost surely (a.s.). The converse is true if the $\left\{{U}_{n},n\ge 1\right\}$ are independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdös [2] proved the converse. The result of Hsu-Robbins-Erdös is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. See, for example, Spitzer [3], Baum and Katz [4], Gut [5], Zarei [6], and so forth. The main purpose of the paper is to provide complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables.
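The Hsu-Robbins phenomenon described above can be illustrated numerically. The following sketch (not part of the original paper; the choice of standard normal summands, ε, and the truncation point are illustrative assumptions) estimates $P(|U_n|>\epsilon)$ for the sample mean $U_n$ of i.i.d. $N(0,1)$ variables and shows that the partial sums of the series ${\sum }_{n}P(|{U}_{n}|>\epsilon )$ stabilize quickly, as complete convergence predicts when the variance is finite.

```python
import numpy as np

# Monte Carlo illustration of Hsu-Robbins complete convergence:
# for i.i.d. N(0,1) summands the sample mean U_n converges completely to 0,
# i.e. sum_n P(|U_n| > eps) < infinity for every eps > 0.
rng = np.random.default_rng(0)
eps = 0.5
reps, n_max = 2000, 200  # illustrative truncation of the infinite series

samples = rng.standard_normal((reps, n_max))
# U_n for each replication: running means along each row
means = np.cumsum(samples, axis=1) / np.arange(1, n_max + 1)
# empirical P(|U_n| > eps) for n = 1, ..., n_max
probs = (np.abs(means) > eps).mean(axis=0)

partial_sum = probs.sum()  # partial sum of the series up to n_max
print(partial_sum)         # stabilizes: the tail terms are already negligible
print(probs[-1])
```

By contrast, for summands with infinite variance (e.g. a symmetric stable law) the empirical partial sums keep growing, which is the content of the Erdös converse.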

Firstly, let us recall the definitions of sequences of $\stackrel{˜}{\rho }$-mixing random variables and arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables.

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables defined on a fixed probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$. Write ${\mathcal{F}}_{S}=\sigma \left({X}_{i},i\in S\subset \mathbb{N}\right)$. Given two σ-algebras $\mathcal{B}$ and $\mathcal{R}$ in $\mathcal{F}$, let

$\rho \left(\mathcal{B},\mathcal{R}\right)=\underset{X\in {L}_{2}\left(\mathcal{B}\right),Y\in {L}_{2}\left(\mathcal{R}\right)}{sup}\frac{|EXY-EXEY|}{{\left(VarXVarY\right)}^{1/2}}.$

Define the $\stackrel{˜}{\rho }$-mixing coefficients by

$\stackrel{˜}{\rho }\left(k\right)=sup\left\{\rho \left({\mathcal{F}}_{S},{\mathcal{F}}_{T}\right):S,T\subset \mathbb{N}\phantom{\rule{0.25em}{0ex}}\text{are finite with}\phantom{\rule{0.25em}{0ex}}dist\left(S,T\right)\ge k\right\},\phantom{\rule{1em}{0ex}}k\ge 0,$

where $dist\left(S,T\right)=min\left\{|s-t|:s\in S,t\in T\right\}$.
Obviously, $0\le \stackrel{˜}{\rho }\left(k+1\right)\le \stackrel{˜}{\rho }\left(k\right)\le 1$, and $\stackrel{˜}{\rho }\left(0\right)=1$.

Definition 1.1 A sequence $\left\{{X}_{n},n\ge 1\right\}$ of random variables is said to be a $\stackrel{˜}{\rho }$-mixing sequence if there exists $k\in \mathbb{N}$ such that $\stackrel{˜}{\rho }\left(k\right)<1$.

An array $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ of random variables is called rowwise $\stackrel{˜}{\rho }$-mixing random variables if for every $n\ge 1$, $\left\{{X}_{ni},i\ge 1\right\}$ is a sequence of $\stackrel{˜}{\rho }$-mixing random variables.

$\stackrel{˜}{\rho }$-mixing random variables were introduced by Bradley [7] and many applications have been found. $\stackrel{˜}{\rho }$-mixing is similar to ρ-mixing, but the two conditions are quite different. Many authors have studied this concept and provided interesting results and applications. See, for example, Bryc and Smolenski [8], Peligrad [9, 10], Peligrad and Gut [11], Utev and Peligrad [12], Gan [13], Cai [14], Zhu [15], Wu and Jiang [16, 17], An and Yuan [18], Kuczmaszewska [19], Sung [20], Wang et al. [21–23], and so on.

Recently, An and Yuan [18] obtained a complete convergence result for weighted sums of identically distributed $\stackrel{˜}{\rho }$-mixing random variables as follows.

Theorem 1.1 Let $p>1/\alpha$ and $1/2<\alpha \le 2$. Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed $\stackrel{˜}{\rho }$-mixing random variables with $E{X}_{1}=0$. Assume that $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ is an array of real numbers satisfying

$\sum _{i=1}^{n}{|{a}_{ni}|}^{p}=O\left({n}^{\delta }\right)\phantom{\rule{1em}{0ex}}\mathit{\text{for some}}\phantom{\rule{0.3em}{0ex}}0<\delta <1,$
(1.1)
$\mathrm{♯}{A}_{nk}=\mathrm{♯}\left\{1\le i\le n:{|{a}_{ni}|}^{p}>{\left(k+1\right)}^{-1}\right\}\ge n{e}^{-1/k},\phantom{\rule{1em}{0ex}}\mathrm{\forall }k\ge 1,n\ge 1.$
(1.2)

Then the following statements are equivalent:

1. (i)

$E{|{X}_{1}|}^{p}<\mathrm{\infty }$;

2. (ii)

${\sum }_{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}P\left({max}_{1\le j\le n}|{\sum }_{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {n}^{\alpha }\right)<\mathrm{\infty }$ for all $\epsilon >0$.

Sung [20] pointed out that the array $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ satisfying both (1.1) and (1.2) does not exist and obtained a new complete convergence result for weighted sums of identically distributed $\stackrel{˜}{\rho }$-mixing random variables as follows.

Theorem 1.2 Let $p>1/\alpha$ and $1/2<\alpha \le 2$. Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed $\stackrel{˜}{\rho }$-mixing random variables with $E{X}_{1}=0$. Assume that $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ is an array of real numbers satisfying

$\sum _{i=1}^{n}{|{a}_{ni}|}^{q}=O\left(n\right)\phantom{\rule{1em}{0ex}}\mathit{\text{for some}}\phantom{\rule{0.3em}{0ex}}q>p.$
(1.3)

If $E{|{X}_{1}|}^{p}<\mathrm{\infty }$, then

$\sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {n}^{\alpha }\right)<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathrm{\forall }\epsilon >0.$
(1.4)

Conversely, if (1.4) holds for any array $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ satisfying (1.3), then $E{|{X}_{1}|}^{p}<\mathrm{\infty }$.

For more details about complete convergence results for weighted sums of dependent sequences, one can refer to Wu [24, 25], Wang et al. [26, 27], and so forth. The main purpose of this paper is to further study the complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables under mild conditions. The main idea is inspired by Baek et al. [28] and Wu [25]. As applications, we extend the results of Baum and Katz [4] from the i.i.d. case to the setting of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables, and we provide the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\stackrel{˜}{\rho }$-mixing random variables. We give some sufficient conditions for complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables without the assumption of identical distribution. The techniques used in the paper are the Rosenthal type inequality and the truncation method.

Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same in each appearance, and $\lfloor x\rfloor$ denotes the integer part of x. For a finite set A, the symbol ♯A denotes the number of elements in the set A. Let $I\left(A\right)$ be the indicator function of the set A. Denote $logx=ln max\left(x,e\right)$, ${X}^{+}=max\left(X,0\right)$ and ${X}^{-}=max\left(-X,0\right)$.

The paper is organized as follows. Two important lemmas are provided in Section 2. The main results and their proofs are presented in Section 3. We get complete convergence for arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables which are stochastically dominated by a random variable X.

## 2 Preliminaries

Firstly, we give the definition of stochastic domination.

Definition 2.1 A sequence $\left\{{X}_{n},n\ge 1\right\}$ of random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that

$P\left(|{X}_{n}|>x\right)\le CP\left(|X|>x\right)$
(2.1)

for all $x\ge 0$ and $n\ge 1$.

An array $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ of rowwise random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that

$P\left(|{X}_{ni}|>x\right)\le CP\left(|X|>x\right)$
(2.2)

for all $x\ge 0$, $i\ge 1$ and $n\ge 1$.
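Definition 2.1 can be made concrete with a family whose tails are available in closed form. The following sketch is an illustrative example not taken from the paper: an array of exponential random variables with rates ${\lambda }_{ni}\ge 1$ (the particular rates chosen below are arbitrary) is stochastically dominated by a standard exponential X with $C=1$, since $P\left(|{X}_{ni}|>x\right)={e}^{-{\lambda }_{ni}x}\le {e}^{-x}=P\left(|X|>x\right)$ for all $x\ge 0$.

```python
import math

# Illustrative check of stochastic domination (2.2) with closed-form tails:
# X_ni ~ Exponential(lam_ni) with lam_ni >= 1 is dominated by X ~ Exponential(1)
# with constant C = 1.
C = 1.0

def tail_Xni(n: int, i: int, x: float) -> float:
    # any rates >= 1 work; this particular choice is purely illustrative
    lam = 1.0 + 1.0 / (n + i)
    return math.exp(-lam * x)

def tail_X(x: float) -> float:
    return math.exp(-x)

# verify P(|X_ni| > x) <= C * P(|X| > x) on a grid of n, i, x
dominated = all(
    tail_Xni(n, i, x) <= C * tail_X(x)
    for n in range(1, 6)
    for i in range(1, 6)
    for x in [0.0, 0.1, 0.5, 1.0, 5.0, 20.0]
)
print(dominated)
```

Note that stochastic domination does not require the ${X}_{ni}$ to be identically distributed; it only caps all the tails by one reference tail.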

The proofs of the main results of the paper are based on the following two lemmas. One is the classic Rosenthal type inequality for $\stackrel{˜}{\rho }$-mixing random variables obtained by Utev and Peligrad [12], the other is the fundamental inequalities for stochastic domination.

Lemma 2.1 (cf. Utev and Peligrad [[12], Theorem 2.1])

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables, $E{X}_{i}=0$, $E{|{X}_{i}|}^{p}<\mathrm{\infty }$ for some $p\ge 2$ and for every $i\ge 1$. Then there exists a positive constant C depending only on p such that

$E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}{|}^{p}\right)\le C\left\{\sum _{i=1}^{n}E{|{X}_{i}|}^{p}+{\left(\sum _{i=1}^{n}E{X}_{i}^{2}\right)}^{p/2}\right\}.$

Lemma 2.2 Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise random variables which is stochastically dominated by a random variable X. For any $\alpha >0$ and $b>0$, the following two statements hold:

$E{|{X}_{ni}|}^{\alpha }I\left(|{X}_{ni}|\le b\right)\le {C}_{1}\left[E{|X|}^{\alpha }I\left(|X|\le b\right)+{b}^{\alpha }P\left(|X|>b\right)\right],$
(2.3)
$E{|{X}_{ni}|}^{\alpha }I\left(|{X}_{ni}|>b\right)\le {C}_{2}E{|X|}^{\alpha }I\left(|X|>b\right),$
(2.4)

where ${C}_{1}$ and ${C}_{2}$ are positive constants.

Proof The proof of this lemma can be found in Wu [29] or Wang et al. [30]. □

## 3 Main results and their applications

In this section, we provide complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\stackrel{˜}{\rho }$-mixing random variables are obtained. Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise $\stackrel{˜}{\rho }$-mixing random variables. We assume that the mixing coefficients $\stackrel{˜}{\rho }\left(\cdot \right)$ in each row are the same.

Theorem 3.1 Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X with $E{X}_{ni}=0$ for all $i\ge 1$, $n\ge 1$, and let $\beta \ge -1$. Let $\left\{{a}_{ni},i\ge 1,n\ge 1\right\}$ be an array of constants such that

$\underset{i\ge 1}{sup}|{a}_{ni}|=O\left({n}^{-r}\right)\phantom{\rule{1em}{0ex}}\mathit{\text{for some}}\phantom{\rule{0.3em}{0ex}}r>0$
(3.1)

and

$\sum _{i=1}^{\mathrm{\infty }}|{a}_{ni}|=O\left({n}^{\alpha }\right)\phantom{\rule{1em}{0ex}}\mathit{\text{for some}}\phantom{\rule{0.3em}{0ex}}\alpha \in \left[0,r\right).$
(3.2)

Assume further that $1+\alpha +\beta >0$ and that there exists some $\delta >0$ such that $\alpha /r+1<\delta \le 2$; set $s=max\left(1+\frac{1+\alpha +\beta }{r},\delta \right)$. If $E{|X|}^{s}<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)<\mathrm{\infty }.$
(3.3)

If $1+\alpha +\beta <0$ and $E|X|<\mathrm{\infty }$, then (3.3) still holds for all $\epsilon >0$.

Proof Without loss of generality, we assume that ${a}_{ni}>0$ for all $i\ge 1$ and $n\ge 1$ (otherwise, we use ${a}_{ni}^{+}$ and ${a}_{ni}^{-}$ instead of ${a}_{ni}$ respectively, and note that ${a}_{ni}={a}_{ni}^{+}-{a}_{ni}^{-}$). From the conditions (3.1) and (3.2), we assume that

$\underset{i\ge 1}{sup}{a}_{ni}={n}^{-r},\phantom{\rule{2em}{0ex}}\sum _{i=1}^{\mathrm{\infty }}{a}_{ni}={n}^{\alpha },\phantom{\rule{1em}{0ex}}n\ge 1.$
(3.4)

If $1+\alpha +\beta <0$ and $E|X|<\mathrm{\infty }$, then the result can be easily proved by the following:

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)& \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}E|{a}_{ni}{X}_{ni}|\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha +\beta }E|X|<\mathrm{\infty }.\end{array}$

In the following, we consider the case of $1+\alpha +\beta >0$. Denote

${X}_{ni}^{\prime }={a}_{ni}{X}_{ni}I\left(|{a}_{ni}{X}_{ni}|\le 1\right),\phantom{\rule{1em}{0ex}}i\ge 1,n\ge 1.$

It is easy to check that for any $\epsilon >0$,

$\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)\subset \left(\underset{1\le i\le n}{max}|{a}_{ni}{X}_{ni}|>1\right)\cup \left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{ni}^{\prime }|>\epsilon \right),$

which implies that

$\begin{array}{r}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)\\ \phantom{\rule{1em}{0ex}}\le P\left(\underset{1\le i\le n}{max}|{a}_{ni}{X}_{ni}|>1\right)+P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{ni}^{\prime }|>\epsilon \right)\\ \phantom{\rule{1em}{0ex}}\le \sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{ni}|>1\right)+P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right)|>\epsilon -\underset{1\le j\le n}{max}|\sum _{i=1}^{j}E{X}_{ni}^{\prime }|\right).\end{array}$
(3.5)

Firstly, we show that

$\underset{1\le j\le n}{max}|\sum _{i=1}^{j}E{X}_{ni}^{\prime }|\to 0\phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{0.25em}{0ex}}n\to \mathrm{\infty }.$

(3.6)

Actually, by the conditions $E{X}_{ni}=0$, Lemma 2.2, (3.4) and $E{|X|}^{1+\alpha /r}<\mathrm{\infty }$ (since $E{|X|}^{s}<\mathrm{\infty }$), we have that

$\begin{array}{rcl}\underset{1\le j\le n}{max}|\sum _{i=1}^{j}E{X}_{ni}^{\prime }|&=&\underset{1\le j\le n}{max}|\sum _{i=1}^{j}E{a}_{ni}{X}_{ni}I\left(|{a}_{ni}{X}_{ni}|>1\right)|\\ &\le &C\sum _{i=1}^{n}{a}_{ni}E|X|I\left(|X|>{n}^{r}\right)\\ &\le &C{n}^{\alpha }\cdot {n}^{-\alpha }E{|X|}^{1+\alpha /r}I\left(|X|>{n}^{r}\right)\\ &=&CE{|X|}^{1+\alpha /r}I\left(|X|>{n}^{r}\right)\to 0\phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{0.25em}{0ex}}n\to \mathrm{\infty },\end{array}$

which implies (3.6). It follows from (3.5) and (3.6) that for n large enough,

$P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)\le \sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{ni}|>1\right)+P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right)|>\frac{\epsilon }{2}\right).$

Hence, to prove (3.3), we only need to show that

$I\doteq \sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{ni}|>1\right)<\mathrm{\infty }$
(3.7)

and

$J\doteq \sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right)|>\frac{\epsilon }{2}\right)<\mathrm{\infty }.$
(3.8)

By (3.4) and $E{|X|}^{s}<\mathrm{\infty }$, we can get that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{ni}|>1\right)& \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}P\left(|{a}_{ni}X|>1\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}{a}_{ni}E|X|I\left(|X|>\frac{1}{{a}_{ni}}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha +\beta }E|X|I\left(|X|>{n}^{r}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha +\beta }\sum _{k=n}^{\mathrm{\infty }}E|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ =& C\sum _{k=1}^{\mathrm{\infty }}\sum _{n=1}^{k}{n}^{\alpha +\beta }E|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & C\sum _{k=1}^{\mathrm{\infty }}{k}^{1+\alpha +\beta }E|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & C\sum _{k=1}^{\mathrm{\infty }}E{|X|}^{1+\left(1+\alpha +\beta \right)/r}I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & CE{|X|}^{1+\left(1+\alpha +\beta \right)/r}<\mathrm{\infty },\end{array}$

which implies (3.7).

By Markov’s inequality, Lemma 2.1, ${C}_{r}$’s inequality and Jensen’s inequality, we have for $M\ge 2$ that

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right)|>\frac{\epsilon }{2}\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right){|}^{M}\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\left[{\left(\sum _{i=1}^{n}E{|{X}_{ni}^{\prime }|}^{2}\right)}^{M/2}+\sum _{i=1}^{n}E{|{X}_{ni}^{\prime }|}^{M}\right]\\ \phantom{\rule{1em}{0ex}}\doteq {J}_{1}+{J}_{2}.\end{array}$
(3.9)

Take

$M>max\left(2,\frac{2\left(1+\beta \right)}{r\left[\delta -\left(1+\alpha /r\right)\right]},1+\frac{1+\alpha +\beta }{r}\right),$

which implies that $\beta -r\left[\delta -\left(1+\alpha /r\right)\right]M/2<-1$ and $\alpha +\beta -r\left(M-1\right)<-1$. Since $E{|X|}^{\delta }<\mathrm{\infty }$, we have by Lemma 2.2, Markov’s inequality and (3.4) that

$\begin{array}{rcl}{J}_{1}&\doteq &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }{\left(\sum _{i=1}^{n}E{|{X}_{ni}^{\prime }|}^{2}\right)}^{M/2}\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }{\left(\sum _{i=1}^{n}E{|{a}_{ni}X|}^{\delta }\right)}^{M/2}\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }{\left(E{|X|}^{\delta }{\left(\underset{i\ge 1}{sup}{a}_{ni}\right)}^{\delta -1}\sum _{i=1}^{\mathrm{\infty }}{a}_{ni}\right)}^{M/2}\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta -r\left[\delta -\left(1+\alpha /r\right)\right]M/2}<\mathrm{\infty }.\end{array}$

(3.10)

By Lemma 2.2 again, we can see that

$\begin{array}{rcl}{J}_{2}& \doteq & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}E{|{X}_{ni}^{\prime }|}^{M}\\ =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}E{|{a}_{ni}{X}_{ni}|}^{M}I\left(|{a}_{ni}{X}_{ni}|\le 1\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}P\left(|{a}_{ni}X|>1\right)+C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}E{|{a}_{ni}X|}^{M}I\left(|{a}_{ni}X|\le 1\right)\\ \doteq & {J}_{3}+{J}_{4}.\end{array}$
(3.11)

${J}_{3}<\mathrm{\infty }$ was proved in (3.7). In the following, we show that ${J}_{4}<\mathrm{\infty }$. Denote

${I}_{nj}=\left\{i:{\left(nj\right)}^{r}\le 1/{a}_{ni}<{\left[n\left(j+1\right)\right]}^{r}\right\},\phantom{\rule{1em}{0ex}}n\ge 1,j\ge 1.$
(3.12)

It is easily seen that ${I}_{nk}\cap {I}_{nj}=\mathrm{\varnothing }$ for $k\ne j$ and ${\bigcup }_{j=1}^{\mathrm{\infty }}{I}_{nj}=\mathbb{N}$ for all $n\ge 1$. Hence,

$\begin{array}{rcl}{J}_{4}&=&C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{i=1}^{n}E{|{a}_{ni}X|}^{M}I\left(|{a}_{ni}X|\le 1\right)\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\sum _{i\in {I}_{nj}}E{|{a}_{ni}X|}^{M}I\left(|{a}_{ni}X|\le 1\right)\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}E{|X|}^{M}I\left(|X|\le {\left[n\left(j+1\right)\right]}^{r}\right)\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\sum _{k=0}^{2n}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &&+C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\sum _{k=2n+1}^{n\left(j+1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\doteq &{J}_{5}+{J}_{6}.\end{array}$

(3.13)

It is easily seen that for all $m\ge 1$,

$\begin{array}{rcl}{n}^{\alpha }& =& \sum _{i=1}^{\mathrm{\infty }}{a}_{ni}=\sum _{j=1}^{\mathrm{\infty }}\sum _{i\in {I}_{nj}}{a}_{ni}\ge \sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left[n\left(j+1\right)\right]}^{-r}\\ \ge & \sum _{j=m}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left[n\left(j+1\right)\right]}^{-r}\ge \sum _{j=m}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left[n\left(j+1\right)\right]}^{-r}{\left[\frac{n\left(m+1\right)}{n\left(j+1\right)}\right]}^{r\left(M-1\right)}\\ =& \sum _{j=m}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left[n\left(j+1\right)\right]}^{-rM}{\left[n\left(m+1\right)\right]}^{r\left(M-1\right)},\end{array}$

which implies that for all $m\ge 1$,

$\sum _{j=m}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\le C{n}^{\alpha }\cdot {n}^{-r\left(M-1\right)}\cdot {m}^{-r\left(M-1\right)}=C{n}^{\alpha -r\left(M-1\right)}\cdot {m}^{-r\left(M-1\right)}.$
(3.14)

Therefore,

$\begin{array}{rcl}{J}_{5}&\doteq &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\sum _{k=0}^{2n}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha +\beta -r\left(M-1\right)}\sum _{k=0}^{2n}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C+C\sum _{k=1}^{\mathrm{\infty }}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\sum _{n\ge k/2}{n}^{\alpha +\beta -r\left(M-1\right)}\\ &\le &C+C\sum _{k=1}^{\mathrm{\infty }}{k}^{1+\alpha +\beta -r\left(M-1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C+C\sum _{k=1}^{\mathrm{\infty }}E{|X|}^{1+\left(1+\alpha +\beta \right)/r}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\le C+CE{|X|}^{s}<\mathrm{\infty }\end{array}$

(3.15)

and

$\begin{array}{rcl}{J}_{6}&\doteq &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{j=1}^{\mathrm{\infty }}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\sum _{k=2n+1}^{n\left(j+1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\beta }\sum _{k=2n+1}^{\mathrm{\infty }}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\sum _{j\ge k/n-1}\left(\mathrm{♯}{I}_{nj}\right){\left(nj\right)}^{-rM}\\ &\le &C\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha +\beta }\sum _{k=2n+1}^{\mathrm{\infty }}{k}^{-r\left(M-1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{k=3}^{\mathrm{\infty }}{k}^{-r\left(M-1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\sum _{n=1}^{\left[k/2\right]}{n}^{\alpha +\beta }\\ &\le &C\sum _{k=3}^{\mathrm{\infty }}{k}^{1+\alpha +\beta -r\left(M-1\right)}E{|X|}^{M}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{k=3}^{\mathrm{\infty }}E{|X|}^{1+\left(1+\alpha +\beta \right)/r}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\le CE{|X|}^{s}<\mathrm{\infty }.\end{array}$

(3.16)

Thus, the inequality (3.8) follows from (3.9)-(3.11), (3.13), (3.15) and (3.16). This completes the proof of the theorem. □
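The index sets ${I}_{nj}$ in (3.12) can be made tangible with a concrete weight array. The following sketch is a hypothetical illustration, not taken from the paper: for the weights ${a}_{ni}={\left(ni\right)}^{-r}$ with $r=2$ (which satisfy (3.1) and (3.17), since ${\sum }_{i}{\left(ni\right)}^{-2}={n}^{-2}{\pi }^{2}/6$), one has $1/{a}_{ni}={\left(ni\right)}^{r}$, so ${\left(nj\right)}^{r}\le 1/{a}_{ni}<{\left[n\left(j+1\right)\right]}^{r}$ holds exactly when $i=j$, i.e. ${I}_{nj}=\left\{j\right\}$, and the disjointness and covering properties used in the proof are visible directly.

```python
# Illustration of the index sets I_nj from (3.12) for a_ni = (n*i)**(-r), r = 2:
# here 1/a_ni = (n*i)**r, and I_nj reduces to the singleton {j}.
r = 2.0
N = 50  # truncate the infinite index range for this finite check

def I_nj(n: int, j: int) -> set:
    """Indices i with (n*j)**r <= 1/a_ni < (n*(j+1))**r."""
    return {i for i in range(1, N + 1)
            if (n * j) ** r <= (n * i) ** r < (n * (j + 1)) ** r}

disjoint_ok = union_ok = singleton_ok = True
for n in range(1, 6):
    sets = [I_nj(n, j) for j in range(1, N + 1)]
    # the sets are pairwise disjoint ...
    disjoint_ok &= all(sets[a].isdisjoint(sets[b])
                       for a in range(N) for b in range(a + 1, N))
    # ... their union covers every index (the analogue of U_j I_nj = N) ...
    union_ok &= set().union(*sets) == set(range(1, N + 1))
    # ... and for these particular weights each I_nj is the singleton {j}
    singleton_ok &= all(sets[j - 1] == {j} for j in range(1, N + 1))

print(disjoint_ok, union_ok, singleton_ok)
```

For less regular weights the sets ${I}_{nj}$ group together all indices whose weights have comparable magnitude, which is exactly what the counting bound (3.14) exploits.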

Theorem 3.2 Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X and $E{X}_{ni}=0$ for all $i\ge 1$, $n\ge 1$. Let $\left\{{a}_{ni},i\ge 1,n\ge 1\right\}$ be an array of constants such that (3.1) holds and

$\sum _{i=1}^{\mathrm{\infty }}|{a}_{ni}|=O\left(1\right).$
(3.17)

If $E|X|log|X|<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{ni}|>\epsilon \right)<\mathrm{\infty }.$
(3.18)

Proof We use the same notations as those in Theorem 3.1. According to the proof of Theorem 3.1, we only need to show that (3.7) and (3.8) hold, where $\beta =-1$ and $\alpha =0$.

The fact $E|X|log|X|<\mathrm{\infty }$ yields that

$\begin{array}{rcl}I& \doteq & \sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}{X}_{ni}|>1\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>1\right)\\ \le & C\sum _{k=1}^{\mathrm{\infty }}\sum _{n=1}^{k}{n}^{-1}E|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & C\sum _{k=1}^{\mathrm{\infty }}logkE|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & C\sum _{k=1}^{\mathrm{\infty }}E|X|log|X|I\left({k}^{r}\le |X|<{\left(k+1\right)}^{r}\right)\\ \le & CE|X|log|X|<\mathrm{\infty },\end{array}$

which implies (3.7) for $\beta =-1$.

By Markov’s inequality, Lemmas 2.1 and 2.2, we can get that

$\begin{array}{rcl}J& \doteq & \sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{ni}^{\prime }-E{X}_{ni}^{\prime }\right)|>\frac{\epsilon }{2}\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}E{|{X}_{ni}^{\prime }|}^{2}\\ =& C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}E{|{a}_{ni}{X}_{ni}|}^{2}I\left(|{a}_{ni}{X}_{ni}|\le 1\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}P\left(|{a}_{ni}X|>1\right)\\ +C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{i=1}^{n}E{|{a}_{ni}X|}^{2}I\left(|{a}_{ni}X|\le 1\right)\\ \le & C+{J}_{5}^{\ast }+{J}_{6}^{\ast }.\end{array}$
(3.19)

Here, ${J}_{5}^{\ast }$ and ${J}_{6}^{\ast }$ denote ${J}_{5}$ and ${J}_{6}$ with $M=2$ (and $\beta =-1$, $\alpha =0$), respectively. Similar to the proof of ${J}_{5}$, we can get that

${J}_{5}^{\ast }\le C+CE|X|<\mathrm{\infty }.$
(3.20)

Similar to the proof of ${J}_{6}$, we have

$\begin{array}{rcl}{J}_{6}^{\ast }&\le &C\sum _{k=2}^{\mathrm{\infty }}\sum _{n=1}^{\left[k/2\right]}{n}^{-1}\cdot {k}^{-r}E{|X|}^{2}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{k=2}^{\mathrm{\infty }}{k}^{-r}logk\phantom{\rule{0.1em}{0ex}}E{|X|}^{2}I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &C\sum _{k=2}^{\mathrm{\infty }}E|X|log|X|I\left(k\le {|X|}^{\frac{1}{r}}<k+1\right)\\ &\le &CE|X|log|X|<\mathrm{\infty }.\end{array}$

(3.21)

This completes the proof of the theorem. □

By Theorems 3.1 and 3.2, we can extend the results of Baum and Katz [4] for independent and identically distributed random variables to the case of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables as follows.

Corollary 3.1 Let $\left\{{X}_{ni},i\ge 1,n\ge 1\right\}$ be an array of rowwise $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X and $E{X}_{ni}=0$ for all $i\ge 1$, $n\ge 1$.

1. (i)

Let $p>1$ and $1\le t<2$. If $E{|X|}^{pt}<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{p-2}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{ni}|>\epsilon {n}^{\frac{1}{t}}\right)<\mathrm{\infty }.$
(3.22)
2. (ii)

If $E|X|log|X|<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{ni}|>\epsilon n\right)<\mathrm{\infty }.$
(3.23)

Proof (i) Let ${a}_{ni}=0$ if $i>n$ and ${a}_{ni}={n}^{-1/t}$ if $i\le n$. Hence, conditions (3.1) and (3.2) hold for $r=1/t$ and $\alpha =1-1/t$. Take $\beta \doteq p-2>-1$. It is easy to check that

$1+\alpha +\beta =p-\frac{1}{t}>0,\phantom{\rule{2em}{0ex}}1+\frac{1+\alpha +\beta }{r}=pt\doteq s,\phantom{\rule{2em}{0ex}}\frac{\alpha }{r}+1=t<\delta \le 2$

for a suitable choice of δ.

Therefore, the desired result (3.22) follows from Theorem 3.1 immediately.

1. (ii)

Let ${a}_{ni}=0$ if $i>n$ and ${a}_{ni}={n}^{-1}$ if $i\le n$. Hence, conditions (3.1) and (3.17) hold for $r=1$. Therefore, the desired result (3.23) follows from Theorem 3.2 immediately. This completes the proof of the corollary. □
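The exponent bookkeeping in part (i) of the proof above can be sanity-checked numerically. The sketch below (a hypothetical helper, not from the paper; the sample values $p=1.5$, $t=1.2$ are arbitrary) verifies that with $r=1/t$, $\alpha =1-1/t$, $\beta =p-2$ the quantities required by Theorem 3.1 reduce to the claimed values $1+\alpha +\beta =p-1/t$, $s=pt$ and $\alpha /r+1=t$.

```python
# Sanity check of the parameter reduction in the proof of Corollary 3.1(i).
def corollary_parameters(p: float, t: float):
    assert p > 1 and 1 <= t < 2
    r, alpha, beta = 1 / t, 1 - 1 / t, p - 2
    one_plus = 1 + alpha + beta   # should equal p - 1/t (> 0)
    s = 1 + one_plus / r          # should equal p * t
    delta_lower = alpha / r + 1   # should equal t, so t < delta <= 2 is feasible
    return one_plus, s, delta_lower

one_plus, s, delta_lower = corollary_parameters(p=1.5, t=1.2)
print(one_plus, s, delta_lower)
```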

Similar to the proofs of Theorems 3.1-3.2 and Corollary 3.1, we can get the following Baum and Katz type result for sequences of $\stackrel{˜}{\rho }$-mixing random variables.

Theorem 3.3 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X and $E{X}_{n}=0$ for $n\ge 1$.

1. (i)

Let $p>1$ and $1\le t<2$. If $E{|X|}^{pt}<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{p-2}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}|>\epsilon {n}^{\frac{1}{t}}\right)<\mathrm{\infty }.$
(3.24)
2. (ii)

If $E|X|log|X|<\mathrm{\infty }$, then for all $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}|>\epsilon n\right)<\mathrm{\infty }.$
(3.25)

By Theorem 3.3, we can get the Marcinkiewicz-Zygmund type strong law of large numbers for $\stackrel{˜}{\rho }$-mixing random variables as follows.

Corollary 3.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X and $E{X}_{n}=0$ for $n\ge 1$.

1. (i)

Let $p>1$ and $1\le t<2$. If $E{|X|}^{pt}<\mathrm{\infty }$, then

${n}^{-\frac{1}{t}}\sum _{i=1}^{n}{X}_{i}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}n\to \mathrm{\infty }.$
(3.26)
2. (ii)

If $E|X|log|X|<\mathrm{\infty }$, then

$\frac{1}{n}\sum _{i=1}^{n}{X}_{i}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}n\to \mathrm{\infty }.$
(3.27)

Proof (i) By (3.24), we can get that for all $\epsilon >0$,

$\sum _{k=1}^{\mathrm{\infty }}P\left(\underset{1\le j\le {2}^{k}}{max}|\sum _{i=1}^{j}{X}_{i}|>\epsilon {2}^{\left(k+1\right)/t}\right)<\mathrm{\infty }.$

By the Borel-Cantelli lemma, we obtain that

${2}^{-\left(k+1\right)/t}\underset{1\le j\le {2}^{k}}{max}|\sum _{i=1}^{j}{X}_{i}|\to 0\phantom{\rule{1em}{0ex}}\text{a.s.},\phantom{\rule{0.5em}{0ex}}k\to \mathrm{\infty }.$

(3.28)

For all positive integers n, there exists a positive integer ${k}_{0}$ such that ${2}^{{k}_{0}-1}\le n<{2}^{{k}_{0}}$. We have by (3.28) that

${n}^{-\frac{1}{t}}|\sum _{i=1}^{n}{X}_{i}|\le {2}^{2/t}\cdot {2}^{-\left({k}_{0}+1\right)/t}\underset{1\le j\le {2}^{{k}_{0}}}{max}|\sum _{i=1}^{j}{X}_{i}|\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}\phantom{\rule{0.5em}{0ex}}\text{as}\phantom{\rule{0.25em}{0ex}}n\to \mathrm{\infty },$

which implies (3.26).

1. (ii)

Similar to the proof of (i), we can get (ii) immediately. The details are omitted. This completes the proof of the corollary. □
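The Marcinkiewicz-Zygmund rate in (3.26) can be observed in simulation. The following sketch is a hypothetical numerical illustration, not part of the paper: it uses i.i.d. centered Gaussian summands (an independent sequence is a trivial special case of a $\stackrel{˜}{\rho }$-mixing sequence, and every moment $E{|X|}^{pt}$ is finite here) and checks that ${n}^{-1/t}{\sum }_{i=1}^{n}{X}_{i}$ is small for large n when $t=1.5$.

```python
import numpy as np

# Simulation of the Marcinkiewicz-Zygmund rate (3.26): n^{-1/t} * S_n -> 0 a.s.
rng = np.random.default_rng(42)
t = 1.5
x = rng.standard_normal(100_000)   # centered, all moments finite
partial_sums = np.cumsum(x)
n = np.arange(1, x.size + 1)
scaled = np.abs(partial_sums) / n ** (1 / t)  # |n^{-1/t} S_n|

print(scaled[-1])  # small for large n
```

Since $|{S}_{n}|$ is of order ${n}^{1/2}$ here while the normalization is ${n}^{1/t}={n}^{2/3}$, the scaled sums shrink like ${n}^{-1/6}$ along this sample path, consistent with (3.26).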

Remark 3.1 We point out that the cases $1+\alpha +\beta >0$ and $1+\alpha +\beta <0$ are considered in Theorem 3.1 and the case $1+\alpha +\beta =0$ is considered in Theorem 3.2, respectively. Theorem 3.1 and Theorem 3.2 consider the complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables, while Theorem 3.3 considers the complete convergence for weighted sums of sequences of $\stackrel{˜}{\rho }$-mixing random variables. In addition, Theorem 3.1 and Theorem 3.2 could be applied to obtain the Baum and Katz type result for arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables, while Theorem 3.3 could be applied to establish the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\stackrel{˜}{\rho }$-mixing random variables.

## References

1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33(2):25–31. 10.1073/pnas.33.2.25

2. Erdös P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20(2):286–291. 10.1214/aoms/1177730037

3. Spitzer FL: A combinatorial lemma and its application to probability theory. Trans. Am. Math. Soc. 1956, 82(2):323–339. 10.1090/S0002-9947-1956-0079851-X

4. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120(1):108–123. 10.1090/S0002-9947-1965-0198524-1

5. Gut A: Complete convergence for arrays. Period. Math. Hung. 1992, 25(1):51–75. 10.1007/BF02454383

6. Zarei H, Jabbari H: Complete convergence of weighted sums under negative dependence. Stat. Pap. 2011, 52(2):413–418. 10.1007/s00362-009-0238-4

7. Bradley RC: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 1992, 5: 355–374. 10.1007/BF01046741

8. Bryc W, Smolenski W: Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Am. Math. Soc. 1993, 119(2):629–635. 10.1090/S0002-9939-1993-1149969-7

9. Peligrad M: On the asymptotic normality of sequences of weak dependent random variables. J. Theor. Probab. 1996, 9(3):703–715. 10.1007/BF02214083

10. Peligrad M: Maximum of partial sums and an invariance principle for a class of weak dependent random variables. Proc. Am. Math. Soc. 1998, 126(4):1181–1189. 10.1090/S0002-9939-98-04177-X

11. Peligrad M, Gut A: Almost sure results for a class of dependent random variables. J. Theor. Probab. 1999, 12: 87–104. 10.1023/A:1021744626773

12. Utev S, Peligrad M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 2003, 16(1):101–115. 10.1023/A:1022278404634

13. Gan SX:Almost sure convergence for $\stackrel{˜}{\rho }$-mixing random variable sequences. Stat. Probab. Lett. 2004, 67: 289–298. 10.1016/j.spl.2003.12.011

14. Cai GH:Strong law of large numbers for $\stackrel{˜}{\rho }$-mixing sequences with different distributions. Discrete Dyn. Nat. Soc. 2006., 2006: Article ID 27648

15. Zhu MH:Strong laws of large numbers for arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables. Discrete Dyn. Nat. Soc. 2007., 2007: Article ID 74296

16. Wu QY, Jiang YY:Some strong limit theorems for $\stackrel{˜}{\rho }$-mixing sequences of random variables. Stat. Probab. Lett. 2008, 78(8):1017–1023. 10.1016/j.spl.2007.09.061

17. Wu QY, Jiang YY:Strong limit theorems for weighted product sums of $\stackrel{˜}{\rho }$-mixing sequences of random variables. J. Inequal. Appl. 2009., 2009: Article ID 174768

18. An J, Yuan DM:Complete convergence of weighted sums for $\stackrel{˜}{\rho }$-mixing sequence of random variables. Stat. Probab. Lett. 2008, 78(12):1466–1472. 10.1016/j.spl.2007.12.020

19. Kuczmaszewska A:On Chung-Teicher type strong law of large numbers for $\stackrel{˜}{\rho }$-mixing random variables. Discrete Dyn. Nat. Soc. 2008., 2008: Article ID 140548

20. Sung SH:Complete convergence for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 630608

21. Wang XJ, Hu SH, Shen Y, Yang WZ: Some new results for weakly dependent random variable sequences. Chinese J. Appl. Probab. Statist. 2010, 26(6):637–648.

22. Wang XJ, Xia FX, Ge MM, Hu SH, Yang WZ:Complete consistency of the estimator of nonparametric regression models based on $\stackrel{˜}{\rho }$-mixing sequences. Abstr. Appl. Anal. 2012., 2012: Article ID 907286

23. Wang XJ, Li XQ, Yang WZ, Hu SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 2012, 25: 1916–1920. 10.1016/j.aml.2012.02.069

24. Wu QY: Sufficient and necessary conditions of complete convergence for weighted sums of PNQD random variables. J. Appl. Math. 2012., 2012: Article ID 104390

25. Wu QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012., 2012: Article ID 50 10.1186/1029-242X-2012-50

26. Wang XJ, Hu SH, Yang WZ: Convergence properties for asymptotically almost negatively associated sequence. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 218380

27. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2011., 2011: Article ID 717126

28. Baek JI, Choi IB, Niu SL: On the complete convergence of weighted sums for arrays of negatively associated variables. J. Korean Stat. Soc. 2008, 37: 73–80. 10.1016/j.jkss.2007.08.001

29. Wu QY: Probability Limit Theory for Mixing Sequences. Science Press of China, Beijing; 2006.

30. Wang XJ, Hu SH, Yang WZ, Wang XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012., 2012: Article ID 315138


## Acknowledgements

The authors are most grateful to the editor Jewgeni Dshalalow and anonymous referees for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11201001, 11171001), the Natural Science Foundation of Anhui Province (1308085QA03, 11040606M12, 1208085QA03), the 211 project of Anhui University, the Youth Science Research Fund of Anhui University, Applied Teaching Model Curriculum of Anhui University (XJYYXKC04) and the Students Science Research Training Program of Anhui University (KYXL2012007, kyxl2013003).

## Author information


### Corresponding author

Correspondence to Yan Shen.

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.

## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## About this article

### Cite this article

Shen, A., Wu, R., Wang, X. et al. Complete convergence for weighted sums of arrays of rowwise $\stackrel{˜}{\rho }$-mixing random variables. J Inequal Appl 2013, 356 (2013). https://doi.org/10.1186/1029-242X-2013-356
