# On strong law of large numbers and growth rate for a class of random variables

## Abstract

In this paper, we study the strong law of large numbers for a class of random variables satisfying the maximal moment inequality with exponent 2. Our results embrace the Kolmogorov strong law of large numbers and the Marcinkiewicz strong law of large numbers for this class of random variables. In addition, strong growth rate for weighted sums of this class of random variables is presented.

MSC: 60F15.

## 1 Introduction

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables defined on a fixed probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$, and set ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ for $n\ge 1$, with ${S}_{0}=0$. Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be sequences of constants with $0<{b}_{n}↑\mathrm{\infty }$. Then $\left\{{a}_{n}{X}_{n},n\ge 1\right\}$ is said to obey the general strong law of large numbers (SLLN) with norming constants $\left\{{b}_{n},n\ge 1\right\}$ if the normed weighted sums

$\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}-{\mathit{EX}}_{i}\right)\to 0\phantom{\rule{1em}{0ex}}\text{almost surely (a.s.)}$
(1.1)

holds. Note that the SLLN of the form (1.1) embraces the Kolmogorov SLLN (${b}_{n}=n$, ${a}_{n}=1$) and the Marcinkiewicz SLLN (${b}_{n}={n}^{1/r}$, ${a}_{n}=1$, $r>0$). When ${b}_{n}={\sum }_{i=1}^{n}{a}_{i}$, fundamental results for the SLLN were obtained by Jamison et al.

Under an independence assumption, many SLLNs for weighted sums have been obtained. One can refer to Adler and Rosalsky, Chow and Teicher, Fernholz and Teicher, Jamison et al. and Teicher.

Under a pairwise independence assumption, Rosalsky obtained some SLLNs for weighted sums of pairwise independent and identically distributed random variables. Sung obtained sufficient conditions for (1.1) when $\left\{{X}_{n},n\ge 1\right\}$ is a sequence of pairwise independent random variables satisfying ${\int }_{0}^{\mathrm{\infty }}{x}^{p-1}{sup}_{n\ge 1}P\left(|{X}_{n}|>x\right)\phantom{\rule{0.2em}{0ex}}dx<\mathrm{\infty }$. Sung also presented the following result: $\frac{1}{{a}_{n}}{\sum }_{i=1}^{n}\left({X}_{i}-{\mathit{EX}}_{i}I\left(|{X}_{i}|\le {a}_{i}\right)\right)\to 0$ a.s., where $\left\{{a}_{n}\right\}$ is a sequence of positive constants with $\frac{{a}_{n}}{n}↑$ and $\left\{{X}_{n}\right\}$ is a sequence of pairwise independent and identically distributed random variables.

For more details on strong limit theorems in the dependent case, one can refer to Wu, Wu and Jiang, Hu et al., Shen et al., Zhou et al. and Zhou, among others.

Recently, Sung gave the following definition.

Definition 1.1 (Sung )

A random variable sequence $\left\{{X}_{n},n\ge 1\right\}$ is said to satisfy the maximal moment inequality with exponent 2 if there exists a constant C, independent of n and m, such that for all $n\ge m\ge 1$,

$E\left(\underset{m\le k\le n}{max}|\sum _{i=m}^{k}{X}_{i}{|}^{2}\right)\le C\sum _{i=m}^{n}{\mathit{EX}}_{i}^{2}.$
(1.2)

We can see that a wide class of mean zero random variables satisfies (1.2). Inspired by Sung [7, 15], we establish SLLN of the form (1.1) for a class of random variables satisfying the maximal moment inequality with exponent 2.
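For example, if the $X_n$ are independent with mean zero, the partial sums form a martingale and Doob's $L^2$ maximal inequality gives (1.2) with $C=4$. The following Python sketch (a Monte Carlo illustration with arbitrary parameters, not part of the proofs) compares the two sides of (1.2) for $m=1$:

```python
import random

def max_partial_sum_sq(xs):
    # max over k of |x_1 + ... + x_k|^2 along one sample path
    s, best = 0.0, 0.0
    for x in xs:
        s += x
        best = max(best, s * s)
    return best

def check_maximal_inequality(n=50, reps=20000, seed=0):
    # Monte Carlo estimate of E(max_{1<=k<=n} |S_k|^2) for i.i.d.
    # standard normal X_i, compared with the right-hand side of (1.2)
    # with C = 4 (Doob's L^2 inequality) and EX_i^2 = 1.
    rng = random.Random(seed)
    est = sum(
        max_partial_sum_sq(rng.gauss(0.0, 1.0) for _ in range(n))
        for _ in range(reps)
    ) / reps
    bound = 4 * n
    return est, bound

est, bound = check_maximal_inequality()
print(est <= bound)  # the empirical left side stays under the bound
```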

The rest of the paper is organized as follows. In Section 2, some preliminary definitions and lemmas are presented. In Section 3, the main results and their proofs are provided.

Throughout the paper, $I\left(A\right)$ denotes the indicator function of the set A, and C denotes a positive constant, not depending on n, which may differ from place to place. For sequences of positive numbers $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$, ${a}_{n}\ll {b}_{n}$ means that there exists a constant $C>0$ such that ${a}_{n}\le C{b}_{n}$ for all n.

## 2 Preliminaries

The following definition and lemmas will be needed in this paper.

Lemma 2.1 (Sung )

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables and put $G\left(x\right)={sup}_{n\ge 1}P\left(|{X}_{n}|>x\right)$ for $x\ge 0$. Assume that ${\int }_{0}^{\mathrm{\infty }}{x}^{p-1}G\left(x\right)\phantom{\rule{0.2em}{0ex}}dx<\mathrm{\infty }$ for some $1\le p<2$. Then

1. (i)

${\sum }_{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{n}^{1/p}\right)<\mathrm{\infty }$.

2. (ii)

${\sum }_{n=1}^{\mathrm{\infty }}{\mathit{EX}}_{n}^{2}I\left(|{X}_{n}|\le {n}^{1/p}\right)/{n}^{2/p}<\mathrm{\infty }$.

3. (iii)

${\mathit{EX}}_{n}I\left(|{X}_{n}|>{c}_{n}\right)\to 0$ for any sequence $\left\{{c}_{n},n\ge 1\right\}$ satisfying ${c}_{n}\to \mathrm{\infty }$.

Lemma 2.2 (Sung )

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables satisfying the maximal moment inequality with exponent 2. If ${\sum }_{n=1}^{\mathrm{\infty }}{\mathit{EX}}_{n}^{2}<\mathrm{\infty }$, then ${\sum }_{n=1}^{\mathrm{\infty }}{X}_{n}$ converges almost surely.

Definition 2.3 A random variable sequence $\left\{{X}_{n},n\ge 1\right\}$ is said to be stochastically dominated by a random variable X if there exists a constant C such that

$P\left(|{X}_{n}|>x\right)\le CP\left(|X|>x\right)$

for all $x\ge 0$ and $n\ge 1$.
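As a concrete (hypothetical) example of Definition 2.3: if $X_n$ is uniform on $[-(1-1/n),1-1/n]$ and X is uniform on $[-1,1]$, then $P(|X_n|>x)\le P(|X|>x)$ for all $x\ge 0$, so the sequence is stochastically dominated by X with $C=1$. A short Python check on a grid of x values:

```python
def p_abs_uniform(a, x):
    # P(|U| > x) for U uniform on [-a, a]
    if x >= a:
        return 0.0
    return 1.0 - x / a

def dominated(n, grid=100):
    # check P(|X_n| > x) <= 1 * P(|X| > x) on a grid of x in [0, 2],
    # where X_n is uniform on [-(1 - 1/n), 1 - 1/n] and X on [-1, 1]
    a_n = 1.0 - 1.0 / n
    return all(
        p_abs_uniform(a_n, k / grid) <= p_abs_uniform(1.0, k / grid)
        for k in range(2 * grid + 1)
    )

print(all(dominated(n) for n in range(2, 20)))  # True: C = 1 works
```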

Lemma 2.4 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables which is stochastically dominated by a random variable X. Then, for any $\alpha >0$ and $b>0$, the following statements hold:

$E|{X}_{n}{|}^{\alpha }I\left(|{X}_{n}|\le b\right)\le C\left\{E|X{|}^{\alpha }I\left(|X|\le b\right)+{b}^{\alpha }P\left(|X|>b\right)\right\},$
(2.1)
$E|{X}_{n}{|}^{\alpha }I\left(|{X}_{n}|>b\right)\le CE|X{|}^{\alpha }I\left(|X|>b\right),$
(2.2)

where C is a positive constant.

Lemma 2.5 (Hu )

Let ${b}_{1},{b}_{2},\dots$ be a nondecreasing unbounded sequence of positive numbers. Let ${\alpha }_{1},{\alpha }_{2},\dots$ be nonnegative numbers, and ${\mathrm{\Lambda }}_{k}={\alpha }_{1}+\cdots +{\alpha }_{k}$ for $k\ge 1$. Let r be a fixed positive number. Assume that for each $n\ge 1$,

$E{\left(\underset{1\le k\le n}{max}|{S}_{k}|\right)}^{r}\le C\sum _{k=1}^{n}{\alpha }_{k}.$
(2.3)

If

$\sum _{l=1}^{\mathrm{\infty }}{\mathrm{\Lambda }}_{l}\left(\frac{1}{{b}_{l}^{r}}-\frac{1}{{b}_{l+1}^{r}}\right)<\mathrm{\infty },$
(2.4)
$\frac{{\mathrm{\Lambda }}_{n}}{{b}_{n}^{r}}\phantom{\rule{1em}{0ex}}\mathit{\text{is bounded}},$
(2.5)

then

$\underset{n\to \mathrm{\infty }}{lim}\frac{{S}_{n}}{{b}_{n}}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}$
(2.6)

and with the growth rate

$\frac{{S}_{n}}{{b}_{n}}=O\left(\frac{{\beta }_{n}}{{b}_{n}}\right)\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}$
(2.7)

where

${\beta }_{n}=\underset{1\le k\le n}{max}{b}_{k}{\nu }_{k}^{\delta /r},\phantom{\rule{1em}{0ex}}\mathrm{\forall }0<\delta <1,\phantom{\rule{1em}{0ex}}{\nu }_{n}=\sum _{k=n}^{\mathrm{\infty }}\frac{{\alpha }_{k}}{{b}_{k}^{r}},\phantom{\rule{1em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\frac{{\beta }_{n}}{{b}_{n}}=0.$
(2.8)

Moreover,

$E{\left(\underset{1\le l\le n}{max}|\frac{{S}_{l}}{{b}_{l}}|\right)}^{r}\le 4C\sum _{l=1}^{n}\frac{{\alpha }_{l}}{{b}_{l}^{r}}<\mathrm{\infty },$
(2.9)
$E{\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{b}_{l}}|\right)}^{r}\le 4C\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{b}_{l}^{r}}<\mathrm{\infty }.$
(2.10)

If we further assume that ${\alpha }_{n}>0$ for infinitely many n, then

$E{\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{\beta }_{l}}|\right)}^{r}\le 4C\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{\beta }_{l}^{r}}<\mathrm{\infty }.$
(2.11)

Proof It follows from Corollary 2.1.1 of Hu  that (2.6)-(2.8) hold. By (2.3) and Theorem 1.1 of Fazekas and Klesov , we have

$E{\left(\underset{1\le l\le n}{max}|\frac{{S}_{l}}{{b}_{l}}|\right)}^{r}\le 4C\sum _{l=1}^{n}\frac{{\alpha }_{l}}{{b}_{l}^{r}}\le 4C\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{b}_{l}^{r}}<\mathrm{\infty }.$
(2.12)

Therefore

$E{\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{b}_{l}}|\right)}^{r}=\underset{n\to \mathrm{\infty }}{lim}E{\left(\underset{1\le l\le n}{max}|\frac{{S}_{l}}{{b}_{l}}|\right)}^{r}\le 4C\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{b}_{l}^{r}}<\mathrm{\infty },$
(2.13)

by the monotone convergence theorem (see Rao). Equation (2.11) follows from the proof of Lemma 1.2 of Hu and Hu. □

## 3 Main results

Theorem 3.1 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables and put $G\left(x\right)={sup}_{n\ge 1}P\left(|{X}_{n}|>x\right)$ for $x>0$. Denote ${Y}_{n}=-{n}^{1/p}I\left({X}_{n}<-{n}^{1/p}\right)+{X}_{n}I\left(|{X}_{n}|\le {n}^{1/p}\right)+{n}^{1/p}I\left({X}_{n}>{n}^{1/p}\right)$, $n\ge 1$, where p is a constant with $1\le p<2$. Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be sequences of positive numbers with ${b}_{n}↑\mathrm{\infty }$. Suppose that $\left\{\frac{{a}_{n}}{{b}_{n}}\left({Y}_{n}-E{Y}_{n}\right),n\ge 1\right\}$ satisfies the maximal moment inequality with exponent 2. Assume that the following two conditions hold:

$\sum _{i=1}^{n}{a}_{i}=O\left({b}_{n}\right),$
(3.1)
${a}_{n}{n}^{1/p}=O\left({b}_{n}\right).$
(3.2)

If ${\int }_{0}^{\mathrm{\infty }}{x}^{p-1}G\left(x\right)\phantom{\rule{0.2em}{0ex}}dx<\mathrm{\infty }$, then

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}-{\mathit{EX}}_{i}\right)=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(3.3)

Proof By Lemma 2.1(i),

$\sum _{n=1}^{\mathrm{\infty }}P\left({X}_{n}\ne {Y}_{n}\right)=\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{n}^{1/p}\right)<\mathrm{\infty }.$
(3.4)

Therefore, it follows from the Borel-Cantelli lemma and (3.4) that ${X}_{n}={Y}_{n}$ for all but finitely many n, almost surely. Thus (3.3) is equivalent to the following:

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({Y}_{i}-{\mathit{EX}}_{i}\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.5)

So, in order to prove (3.3), we need only to prove

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({Y}_{i}-E{Y}_{i}\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.6)

and

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({\mathit{EX}}_{i}-E{Y}_{i}\right)=0.$
(3.7)

Firstly, we prove (3.6). In view of Lemma 2.1(i), (ii) and (3.2), we have

$\begin{array}{rl}\sum _{n=1}^{\mathrm{\infty }}E{\left(\frac{{a}_{n}}{{b}_{n}}\left({Y}_{n}-E{Y}_{n}\right)\right)}^{2}& \le \sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}^{2}}{{b}_{n}^{2}}E{Y}_{n}^{2}\\ =\sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}^{2}}{{b}_{n}^{2}}\left[{\mathit{EX}}_{n}^{2}I\left(|{X}_{n}|\le {n}^{1/p}\right)+{n}^{2/p}EI\left(|{X}_{n}|>{n}^{1/p}\right)\right]\\ \ll \sum _{n=1}^{\mathrm{\infty }}\left[{n}^{-2/p}{\mathit{EX}}_{n}^{2}I\left(|{X}_{n}|\le {n}^{1/p}\right)+P\left(|{X}_{n}|>{n}^{1/p}\right)\right]\\ <\mathrm{\infty }.\end{array}$

Thus it follows by Lemma 2.2 that

$\sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}}{{b}_{n}}\left({Y}_{n}-E{Y}_{n}\right)\phantom{\rule{1em}{0ex}}\text{converges a.s.}$
(3.8)

By Kronecker’s lemma, we can obtain (3.6) immediately.
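The step from (3.8) to (3.6) uses Kronecker's lemma: if $\sum {x}_{n}/{b}_{n}$ converges and ${b}_{n}↑\mathrm{\infty }$, then $\frac{1}{{b}_{n}}{\sum }_{i=1}^{n}{x}_{i}\to 0$. A quick numerical illustration with hypothetical sequences (${x}_{i}={\left(-1\right)}^{i}$, ${b}_{n}=n$, so $\sum {x}_{n}/{b}_{n}$ is the convergent alternating harmonic series):

```python
def kronecker_demo(n):
    # b_n = n, x_i = (-1)^i: sum x_i / b_i converges, so Kronecker's
    # lemma predicts (1 / n) * sum_{i<=n} x_i -> 0 as n grows
    partial = sum((-1) ** i for i in range(1, n + 1))
    return partial / n

print(abs(kronecker_demo(10**6)))  # 0.0 for even n; O(1/n) in general
```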

Secondly, we prove (3.7). By (3.1) and Lemma 2.1(iii), we can get

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}E{X}_{i}I\left(|{X}_{i}|>{i}^{1/p}\right)=0.$
(3.9)

By (3.2) and Lemma 2.1(i), we have

$\sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}}{{b}_{n}}{n}^{1/p}P\left({X}_{n}>{n}^{1/p}\right)\ll \sum _{n=1}^{\mathrm{\infty }}P\left({X}_{n}>{n}^{1/p}\right)\le \sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{n}^{1/p}\right)<\mathrm{\infty }$

and

$\sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}}{{b}_{n}}{n}^{1/p}P\left({X}_{n}<-{n}^{1/p}\right)<\mathrm{\infty }.$

Thus it follows by Kronecker’s lemma that

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{i}^{1/p}P\left({X}_{i}>{i}^{1/p}\right)=0$
(3.10)

and

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{i}^{1/p}P\left({X}_{i}<-{i}^{1/p}\right)=0.$
(3.11)

Therefore,

$\begin{array}{r}\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({\mathit{EX}}_{i}-E{Y}_{i}\right)\\ \phantom{\rule{1em}{0ex}}=\underset{n\to \mathrm{\infty }}{lim}\left[\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}E{X}_{i}I\left(|{X}_{i}|>{i}^{1/p}\right)\\ \phantom{\rule{2em}{0ex}}+\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{i}^{1/p}P\left({X}_{i}<-{i}^{1/p}\right)-\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{i}^{1/p}P\left({X}_{i}>{i}^{1/p}\right)\right]\\ \phantom{\rule{1em}{0ex}}=0\end{array}$

follows from (3.9)-(3.11). Hence the result is proved. □

Theorem 3.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of mean zero random variables, which is stochastically dominated by a random variable X. Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be sequences of positive numbers with ${b}_{n}↑\mathrm{\infty }$. Put ${c}_{n}=\frac{{b}_{n}}{{a}_{n}}$ for $n\ge 1$, and let r be a constant with $1\le r<2$. Denote ${Y}_{n}=-{c}_{n}I\left({X}_{n}<-{c}_{n}\right)+{X}_{n}I\left(|{X}_{n}|\le {c}_{n}\right)+{c}_{n}I\left({X}_{n}>{c}_{n}\right)$, $n\ge 1$, and suppose that $\left\{\frac{{a}_{n}}{{b}_{n}}\left({Y}_{n}-E{Y}_{n}\right),n\ge 1\right\}$ satisfies the maximal moment inequality with exponent 2. Assume that the following two conditions hold:

$E|X{|}^{r}<\mathrm{\infty },$
(3.12)
$N\left(n\right)=Card\left\{i:{c}_{i}\le n\right\}\ll {n}^{r},\phantom{\rule{1em}{0ex}}n\ge 1.$
(3.13)

Then

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{X}_{i}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(3.14)

Proof Let $N\left(0\right)=0$. By (3.13), we can see that ${c}_{n}\to \mathrm{\infty }$ as $n\to \mathrm{\infty }$. By (3.12) and (3.13),

$\begin{array}{rl}\sum _{n=1}^{\mathrm{\infty }}P\left({X}_{n}\ne {Y}_{n}\right)& =\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{c}_{n}\right)\ll \sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\\ =\sum _{n=1}^{\mathrm{\infty }}\sum _{{c}_{n}\le j<{c}_{n}+1}P\left(|X|>{c}_{n}\right)\\ \le \sum _{j=1}^{\mathrm{\infty }}\sum _{j-1<{c}_{n}\le j}P\left(|X|>j-1\right)\\ =\sum _{j=1}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right)\sum _{k=j}^{\mathrm{\infty }}P\left(k-1<|X|\le k\right)\\ =\sum _{k=1}^{\mathrm{\infty }}P\left(k-1<|X|\le k\right)\sum _{j=1}^{k}\left(N\left(j\right)-N\left(j-1\right)\right)\\ =\sum _{k=1}^{\mathrm{\infty }}N\left(k\right)P\left(k-1<|X|\le k\right)\\ \ll \sum _{k=1}^{\mathrm{\infty }}{k}^{r}P\left(k-1<|X|\le k\right)\\ \ll E|X{|}^{r}<\mathrm{\infty },\end{array}$
(3.15)

which, by the Borel-Cantelli lemma, implies that ${X}_{n}={Y}_{n}$ for all but finitely many n, almost surely. So, in order to prove (3.14), we need only to prove

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}{Y}_{i}=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.16)

By (3.12), (3.13), Lemma 2.4, and the proof of (3.15), we have

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}E{\left(\frac{{a}_{n}}{{b}_{n}}\left({Y}_{n}-E{Y}_{n}\right)\right)}^{2}\\ \phantom{\rule{1em}{0ex}}\le \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}E{Y}_{n}^{2}\\ \phantom{\rule{1em}{0ex}}=\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}\left[{\mathit{EX}}_{n}^{2}I\left(|{X}_{n}|\le {c}_{n}\right)+E\left({c}_{n}^{2}I\left(|{X}_{n}|>{c}_{n}\right)\right)\right]\\ \phantom{\rule{1em}{0ex}}\ll \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}{\mathit{EX}}^{2}I\left(|X|\le {c}_{n}\right)+\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\\ \phantom{\rule{1em}{0ex}}=\sum _{n=1}^{\mathrm{\infty }}\sum _{{c}_{n}\le j<{c}_{n}+1}{c}_{n}^{-2}{\mathit{EX}}^{2}I\left(|X|\le {c}_{n}\right)+\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\\ \phantom{\rule{1em}{0ex}}\le \sum _{j=1}^{\mathrm{\infty }}\sum _{j-1<{c}_{n}\le j}{c}_{n}^{-2}{\mathit{EX}}^{2}I\left(|X|\le j\right)+C\\ \phantom{\rule{1em}{0ex}}\ll \sum _{j=2}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-2}\sum _{k=1}^{j}{\mathit{EX}}^{2}I\left(k-1<|X|\le k\right)+C\\ \phantom{\rule{1em}{0ex}}=\sum _{j=2}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-2}\left[{\mathit{EX}}^{2}I\left(0<|X|\le 1\right)+\sum _{k=2}^{j}{\mathit{EX}}^{2}I\left(k-1<|X|\le k\right)\right]+C\\ \phantom{\rule{1em}{0ex}}=\sum _{j=2}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-2}{\mathit{EX}}^{2}I\left(0<|X|\le 1\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{k=2}^{\mathrm{\infty }}{\mathit{EX}}^{2}I\left(k-1<|X|\le k\right)\sum _{j=k}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-2}+C\\ \phantom{\rule{1em}{0ex}}\le \sum _{j=2}^{\mathrm{\infty }}N\left(j\right)\left({\left(j-1\right)}^{-2}-{j}^{-2}\right){\mathit{EX}}^{2}I\left(0<|X|\le 1\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{k=2}^{\mathrm{\infty }}{\mathit{EX}}^{2}I\left(k-1<|X|\le k\right)\sum _{j=k}^{\mathrm{\infty }}N\left(j\right)\left({\left(j-1\right)}^{-2}-{j}^{-2}\right)+C\\ \phantom{\rule{1em}{0ex}}\ll \sum _{j=2}^{\mathrm{\infty }}{j}^{r-3}+\sum _{k=2}^{\mathrm{\infty }}{\mathit{EX}}^{2}I\left(k-1<|X|\le k\right)\sum _{j=k}^{\mathrm{\infty }}{j}^{r-3}+C\\ \phantom{\rule{1em}{0ex}}\ll \sum _{k=2}^{\mathrm{\infty }}{k}^{r-2}E\left(|X{|}^{r}{k}^{2-r}I\left(k-1<|X|\le k\right)\right)+C\\ \phantom{\rule{1em}{0ex}}\ll E|X{|}^{r}+C<\mathrm{\infty }.\end{array}$
(3.17)

Combining Lemma 2.2, (3.17) and Kronecker’s lemma, we can get

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}\left({Y}_{i}-E{Y}_{i}\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.18)

To complete the proof of (3.16), it suffices to show that

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{b}_{n}}\sum _{i=1}^{n}{a}_{i}E{Y}_{i}=0.$
(3.19)

By (3.12), (3.13) and ${\mathit{EX}}_{n}=0$, it follows that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}|\frac{{a}_{n}}{{b}_{n}}E{Y}_{n}|& \le & \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-1}\left[E|{X}_{n}|I\left(|{X}_{n}|>{c}_{n}\right)+E\left({c}_{n}I\left(|{X}_{n}|>{c}_{n}\right)\right)\right]\\ =& \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-1}E|{X}_{n}|I\left(|{X}_{n}|>{c}_{n}\right)+\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{c}_{n}\right).\end{array}$
(3.20)

Observe that

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-1}E|{X}_{n}|I\left(|{X}_{n}|>{c}_{n}\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-1}E|X|I\left(|X|>{c}_{n}\right)\\ \phantom{\rule{1em}{0ex}}=C\sum _{n=1}^{\mathrm{\infty }}\sum _{{c}_{n}\le j<{c}_{n}+1}{c}_{n}^{-1}E|X|I\left(|X|>{c}_{n}\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{j=1}^{\mathrm{\infty }}\sum _{j-1<{c}_{n}\le j}{c}_{n}^{-1}E|X|I\left(|X|>j-1\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{j=2}^{\mathrm{\infty }}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-1}\sum _{n=j-1}^{\mathrm{\infty }}E|X|I\left(n<|X|\le n+1\right)\\ \phantom{\rule{1em}{0ex}}=C\sum _{n=1}^{\mathrm{\infty }}E|X|I\left(n<|X|\le n+1\right)\sum _{j=2}^{n+1}\left(N\left(j\right)-N\left(j-1\right)\right){\left(j-1\right)}^{-1}\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}E|X|I\left(n<|X|\le n+1\right)\sum _{j=2}^{n}N\left(j\right)\left({\left(j-1\right)}^{-1}-{j}^{-1}\right)\\ \phantom{\rule{2em}{0ex}}+C\sum _{n=1}^{\mathrm{\infty }}E|X|I\left(n<|X|\le n+1\right)\frac{N\left(n+1\right)}{n}\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}E|X|I\left(n<|X|\le n+1\right)\sum _{j=2}^{n}{j}^{r-2}+C\sum _{n=1}^{\mathrm{\infty }}{n}^{r-1}E|X|I\left(n<|X|\le n+1\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{r-1}E|X|I\left(n<|X|\le n+1\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}E|X{|}^{r}I\left(n<|X|\le n+1\right)\\ \phantom{\rule{1em}{0ex}}\le CE|X{|}^{r}<\mathrm{\infty }.\end{array}$
(3.21)

So, we can get

$\sum _{n=1}^{\mathrm{\infty }}|\frac{{a}_{n}}{{b}_{n}}E{Y}_{n}|<\mathrm{\infty }$

from (3.15), (3.20) and (3.21). Consequently,

$\sum _{n=1}^{\mathrm{\infty }}\frac{{a}_{n}}{{b}_{n}}E{Y}_{n}\phantom{\rule{1em}{0ex}}\text{converges},$
(3.22)

which implies (3.19) by Kronecker’s lemma. This completes the proof of the theorem. □

Theorem 3.3 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of mean zero random variables satisfying the maximal moment inequality with exponent 2. Denote ${Q}_{n}={max}_{1\le k\le n}{\mathit{EX}}_{k}^{2}$, $n\ge 1$ and ${Q}_{0}=0$. For $1\le p<2$, assume that

$\sum _{n=1}^{\mathrm{\infty }}\frac{{Q}_{n}}{{n}^{2/p}}<\mathrm{\infty }.$
(3.23)

Then

$\underset{n\to \mathrm{\infty }}{lim}\frac{{S}_{n}}{{n}^{1/p}}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}$
(3.24)

and with the growth rate

$\frac{{S}_{n}}{{n}^{1/p}}=O\left(\frac{{\beta }_{n}}{{n}^{1/p}}\right)\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.,}}$
(3.25)

where

$\begin{array}{r}{\beta }_{n}=\underset{1\le k\le n}{max}{k}^{1/p}{\nu }_{k}^{\delta /2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }0<\delta <1,\phantom{\rule{1em}{0ex}}{\nu }_{n}=\sum _{k=n}^{\mathrm{\infty }}\frac{{\alpha }_{k}}{{k}^{2/p}},\\ {\alpha }_{k}=C\left(k{Q}_{k}-\left(k-1\right){Q}_{k-1}\right),\phantom{\rule{1em}{0ex}}k\ge 1,\phantom{\rule{1em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\frac{{\beta }_{n}}{{n}^{1/p}}=0.\end{array}$
(3.26)

Moreover,

$E\left(\underset{1\le l\le n}{max}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{2}\right)\le 4\sum _{l=1}^{n}\frac{{\alpha }_{l}}{{l}^{2/p}}<\mathrm{\infty },$
(3.27)
$E\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{2}\right)\le 4\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{l}^{2/p}}<\mathrm{\infty }.$
(3.28)

If we further assume that ${\alpha }_{n}>0$ for infinitely many n, then

$E\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{\beta }_{l}}{|}^{2}\right)\le 4\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{\beta }_{l}^{2}}<\mathrm{\infty }.$
(3.29)

In addition, for any $0<r<2$,

$E\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{r}\right)\le 1+\frac{4r}{2-r}\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{l}^{2/p}}<\mathrm{\infty }.$
(3.30)

Proof Since $\left\{{X}_{n},n\ge 1\right\}$ is a sequence of mean zero random variables satisfying the maximal moment inequality with exponent 2, we have

$E\left(\underset{1\le j\le n}{max}|\sum _{k=1}^{j}{X}_{k}{|}^{2}\right)\le C\sum _{k=1}^{n}{\mathit{EX}}_{k}^{2}\le Cn{Q}_{n}=\sum _{k=1}^{n}{\alpha }_{k}.$
(3.31)

Since ${Q}_{k}$ is nondecreasing, ${\alpha }_{k}\ge 0$ for all $k\ge 1$. Denote ${b}_{n}={n}^{1/p}$ and ${\mathrm{\Lambda }}_{n}={\sum }_{k=1}^{n}{\alpha }_{k}$, $n\ge 1$. By (3.23), we can get

$\sum _{l=1}^{\mathrm{\infty }}{\mathrm{\Lambda }}_{l}\left(\frac{1}{{b}_{l}^{2}}-\frac{1}{{b}_{l+1}^{2}}\right)=C\sum _{l=1}^{\mathrm{\infty }}l{Q}_{l}\left(\frac{1}{{l}^{2/p}}-\frac{1}{{\left(l+1\right)}^{2/p}}\right)\le \frac{2C}{p}\sum _{l=1}^{\mathrm{\infty }}\frac{{Q}_{l}}{{l}^{2/p}}<\mathrm{\infty }.$
(3.32)

Thus (2.4) holds. It follows from Remark 2.1 in  that (2.4) implies (2.5). By Lemma 2.5, we can get (3.24)-(3.29) immediately. It follows from (3.28) that

$\begin{array}{rcl}E\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{r}\right)& =& {\int }_{0}^{\mathrm{\infty }}P\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{r}>t\right)\phantom{\rule{0.2em}{0ex}}dt\\ =& {\int }_{0}^{1}P\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{r}>t\right)\phantom{\rule{0.2em}{0ex}}dt+{\int }_{1}^{\mathrm{\infty }}P\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{r}>t\right)\phantom{\rule{0.2em}{0ex}}dt\\ \le & 1+E\left(\underset{l\ge 1}{sup}|\frac{{S}_{l}}{{l}^{1/p}}{|}^{2}\right){\int }_{1}^{\mathrm{\infty }}{t}^{-2/r}\phantom{\rule{0.2em}{0ex}}dt\\ \le & 1+\frac{4r}{2-r}\sum _{l=1}^{\mathrm{\infty }}\frac{{\alpha }_{l}}{{l}^{2/p}}<\mathrm{\infty }.\end{array}$

The proof is completed. □

Remark 3.4 It is easy to see that a wide class of mean zero random variables satisfies the maximal moment inequality with exponent 2. Examples include independent random variables, negatively associated random variables (see Matula), negatively superadditive dependent random variables (see Shen et al.), φ-mixing random variables and AANA random variables (see Wang et al. [21, 22]), and $\stackrel{˜}{\rho }$-mixing random variables (see Utev and Peligrad). So Theorems 3.1-3.3 hold for this wide class of random variables.
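As a numerical sanity check of the conclusion (3.24) of Theorem 3.3 (an illustration, not a proof): for i.i.d. standard normal ${X}_{n}$ we have ${Q}_{n}=1$, so (3.23) holds for any $1\le p<2$, and ${S}_{n}/{n}^{1/p}$ should be small for large n. A one-path Python simulation with arbitrary parameters ($p=1.5$, $n={10}^{5}$):

```python
import random

def marcinkiewicz_ratio(n, p, seed=1):
    # |S_n| / n^(1/p) for one sample path of i.i.d. standard normal
    # variables; here Q_n = max_k EX_k^2 = 1, so (3.23) holds for any
    # 1 <= p < 2 and Theorem 3.3 gives S_n / n^(1/p) -> 0 a.s.
    rng = random.Random(seed)
    s = sum(rng.gauss(0.0, 1.0) for _ in range(n))
    return abs(s) / n ** (1.0 / p)

# S_n is of order sqrt(n), so the ratio is of order n^(1/2 - 1/p) -> 0.
print(marcinkiewicz_ratio(100_000, 1.5))
```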

## References

1. Adler A, Rosalsky A: On the strong law of large numbers for normed weighted sums of i.i.d. random variables. Stoch. Anal. Appl. 1987, 5: 467–483. 10.1080/07362998708809131

2. Chow YS, Teicher H: Almost certain summability of independent, identically distributed random variables. Ann. Math. Stat. 1971, 42: 401–404. 10.1214/aoms/1177693533

3. Fernholz LT, Teicher H: Stability of random variables and iterated logarithm laws of martingales and quadratic forms. Ann. Probab. 1980, 8: 765–774. 10.1214/aop/1176994664

4. Jamison B, Orey S, Pruitt W: Convergence of weighted averages of independent random variables. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1965, 4: 40–44. 10.1007/BF00535481

5. Teicher H: Almost certain convergence in double arrays. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1985, 69: 331–345. 10.1007/BF00532738

6. Rosalsky A: Strong stability of normed weighted sums of pairwise i.i.d. random variables. Bull. Inst. Math. Acad. Sin. 1987, 15: 203–219.

7. Sung SH: Strong law of large numbers for weighted sums of pairwise independent random variables. Bull. Inst. Math. Acad. Sin. 1999, 27(1):23–28.

8. Sung SH: On the strong law of large numbers for pairwise i.i.d. random variables with general moment conditions. Stat. Probab. Lett. 2013, 83: 1963–1968. 10.1016/j.spl.2013.05.009

9. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J. Inequal. Appl. 2010., 2010: Article ID 383805

10. Wu QY, Jiang YY:Some strong limit theorems for weighted product sums of $\stackrel{˜}{\phi }$-mixing sequences of random variables. J. Inequal. Appl. 2009., 2009: Article ID 174768

11. Hu SH, Wang XJ, Yang WZ, Zhao T: The Hájek-Rényi-type inequality for associated random variables. Stat. Probab. Lett. 2009, 79: 884–888. 10.1016/j.spl.2008.11.014

12. Shen Y, Wang XJ, Yang WZ, Hu SH: Almost sure convergence theorem and strong stability for weighted sums of NSD random variables. Acta Math. Sin. Engl. Ser. 2013, 29: 743–756. 10.1007/s10114-012-1723-6

13. Zhou XC, Tan CC, Lin JG:On the strong laws for weighted sums of ${\rho }^{\ast }$-mixing random variables. J. Inequal. Appl. 2011., 2011: Article ID 157816

14. Zhou XC: Complete moment convergence of moving average processes under φ -mixing assumptions. Stat. Probab. Lett. 2010, 80: 285–292. 10.1016/j.spl.2009.10.018

15. Sung SH: On the strong law of large numbers for weighted sums of random variables. Comput. Math. Appl. 2011, 62: 4277–4287. 10.1016/j.camwa.2011.10.018

16. Hu SH: Some new results for the strong law of large numbers. Acta Math. Sin. (Chin. Ser.) 2003, 46(6):1123–1134. (Chinese)

17. Fazekas I, Klesov O: A general approach to the strong law of large numbers. Theory Probab. Appl. 2002, 45(3):436–449.

18. Rao MM: Measure Theory and Integration. Wiley, New York; 1987.

19. Hu SH, Hu M: A general approach rate to the strong law of large numbers. Stat. Probab. Lett. 2006, 76: 843–851. 10.1016/j.spl.2005.10.016

20. Matula P: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 1992, 12: 209–213.

21. Wang XJ, Hu SH, Shen Y, Yang WZ: Moment inequality for φ -mixing sequences and its applications. J. Inequal. Appl. 2009., 2009: Article ID 379743

22. Wang XJ, Hu SH, Yang WZ: Convergence properties for asymptotically almost negatively associated sequence. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 218380

23. Utev S, Peligrad M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 2003, 16: 101–115. 10.1023/A:1022278404634

## Acknowledgements

The authors are most grateful to the editor and the anonymous referee for their careful reading and insightful comments. This work is supported by the National Natural Science Foundation of China (11171001, 11201001), Natural Science Foundation of Anhui Province (1208085QA03), Humanities and Social Sciences Project from Ministry of Education of China (12YJC91007), Key Program of Research and Development Foundation of Hefei University (13KY05ZD) and Doctoral Research Start-up Funds Projects of Anhui University.

## Author information


### Corresponding author

Correspondence to Shuhe Hu.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.

## Rights and permissions


Shen, Y., Yang, J. & Hu, S. On strong law of large numbers and growth rate for a class of random variables. J Inequal Appl 2013, 563 (2013). https://doi.org/10.1186/1029-242X-2013-563 