# Complete moment convergence for randomly weighted sums of martingale differences

## Abstract

In this article, we obtain the complete moment convergence for randomly weighted sums of martingale differences. Our results generalize the corresponding ones for the nonweighted sums of martingale differences to the case of randomly weighted sums of martingale differences.

MSC: 60G50, 60F15.

## 1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1]: a sequence of random variables $\left\{{X}_{n},n\ge 1\right\}$ is said to converge completely to a constant C if ${\sum }_{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}-C|\ge \epsilon \right)<\mathrm{\infty }$ for all $\epsilon >0$. In view of the Borel-Cantelli lemma, this implies that ${X}_{n}\to C$ almost surely (a.s.). The converse is true if $\left\{{X}_{n},n\ge 1\right\}$ is independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite; Erdös [2] proved the converse. The Hsu-Robbins-Erdös result is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. Baum and Katz [3] gave the following generalization, which establishes a rate of convergence in the sense of a Marcinkiewicz-Zygmund-type strong law of large numbers.
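Complete convergence is a statement about summability of tail probabilities, so it can be illustrated numerically. The following Python sketch (a purely illustrative aside, not part of the original development; the normal summands, $\epsilon =0.5$ and the sample sizes are arbitrary choices) estimates $P\left(|{S}_{n}/n|\ge \epsilon \right)$ by Monte Carlo and shows how quickly these probabilities decay in n, which is what makes a series of such probabilities finite:

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob(n, eps=0.5, reps=2000):
    """Monte Carlo estimate of P(|S_n / n| >= eps) for i.i.d. N(0,1) summands."""
    sample_means = rng.standard_normal((reps, n)).mean(axis=1)
    return float(np.mean(np.abs(sample_means) >= eps))

probs = {n: tail_prob(n) for n in (10, 100, 1000)}
print(probs)
```

For N(0,1) summands, $P\left(|{S}_{n}/n|\ge \epsilon \right)\le 2exp\left(-n{\epsilon }^{2}/2\right)$, so the tail probabilities vanish faster than any power of n and the series converges for every $\epsilon >0$; Theorem 1.1 below characterizes which moment condition is needed for the polynomially weighted series to converge.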

Theorem 1.1 Let $\alpha >1/2$, $\alpha p>1$, and let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of i.i.d. random variables. Assume that $E{X}_{1}=0$ if $\alpha \le 1$. Then the following statements are equivalent:

1. (i)

$E{|{X}_{1}|}^{p}<\mathrm{\infty }$;

2. (ii)

${\sum }_{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left({max}_{1\le k\le n}|{\sum }_{i=1}^{k}{X}_{i}|>\epsilon {n}^{\alpha }\right)<\mathrm{\infty }$ for all $\epsilon >0$.

Many authors have extended Theorem 1.1 from the i.i.d. case to various dependent cases. For example, Shao [4] investigated moment inequalities for φ-mixing random variables and applied them to the complete convergence of such processes; Yu [5] obtained the complete convergence of weighted sums of martingale differences; Ghosal and Chandra [6] gave the complete convergence of martingale arrays; Stoica [7, 8] investigated Baum-Katz-Nagaev-type results for martingale differences and the rate of convergence in the strong law of large numbers for martingale differences; Wang et al. [9] studied the complete convergence and complete moment convergence for martingale differences, which generalized some results of Stoica [7, 8]; Yang et al. [10] obtained the complete convergence for moving average processes of martingale differences; and so forth. For other work on convergence analysis, one can refer to Gut [11], Chen et al. [12], Sung [13], Sung and Volodin, Hu et al. and the references therein.

Recently, Thanh and Yin studied the complete convergence for randomly weighted sums of independent random elements in Banach spaces. On the other hand, Cabrera et al. investigated theorems on conditional mean convergence and conditional almost sure convergence for randomly weighted sums of dependent random variables. Inspired by the papers above, in this paper we investigate the complete moment convergence for randomly weighted sums of martingale differences, which implies the complete convergence and a Marcinkiewicz-Zygmund-type strong law of large numbers for this stochastic process. We generalize the results of Stoica [7, 8] and Wang et al. [9] from nonweighted sums of martingale differences to randomly weighted sums of martingale differences. The main results are presented in Section 2, and their proofs are given in Section 3.

Recall that the sequence $\left\{{X}_{n},n\ge 1\right\}$ is stochastically dominated by a nonnegative random variable X if there exists a positive constant C such that $P\left(|{X}_{n}|>x\right)\le CP\left(X>x\right)$ for all $x\ge 0$ and all $n\ge 1$.

Throughout the paper, let ${\mathcal{F}}_{0}=\left\{\mathrm{\varnothing },\mathrm{\Omega }\right\}$ and ${x}^{+}=xI\left(x\ge 0\right)$, let $I\left(B\right)$ denote the indicator function of a set B, and let $C,{C}_{1},{C}_{2},\dots$ denote positive constants that do not depend on n and may differ in various places.

To prove the main results of the paper, we need the following lemmas.

Lemma 1.1 (cf. Hall and Heyde, Theorem 2.11)

If $\left\{{X}_{i},{\mathcal{F}}_{i},1\le i\le n\right\}$ is a martingale difference and $p>0$, then there exists a constant C depending only on p such that

$E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{X}_{i}{|}^{p}\right)\le C\left\{E{\left(\sum _{i=1}^{n}E\left({X}_{i}^{2}|{\mathcal{F}}_{i-1}\right)\right)}^{p/2}+E\left(\underset{1\le i\le n}{max}{|{X}_{i}|}^{p}\right)\right\},\phantom{\rule{1em}{0ex}}n\ge 1.$

Lemma 1.2 (cf. Sung [13], Lemma 2.4)

Let $\left\{{X}_{n},n\ge 1\right\}$ and $\left\{{Y}_{n},n\ge 1\right\}$ be sequences of random variables. Then for any $n\ge 1$, $q>1$, $\epsilon >0$ and $a>0$,

$\begin{array}{rcl}E{\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}\left({X}_{i}+{Y}_{i}\right)|-\epsilon a\right)}^{+}& \le & \left(\frac{1}{{\epsilon }^{q}}+\frac{1}{q-1}\right)\frac{1}{{a}^{q-1}}E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}{|}^{q}\right)\\ +E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{Y}_{i}|\right).\end{array}$

Lemma 1.3 (cf. Wang et al. [9], Lemma 2.2)

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables stochastically dominated by a nonnegative random variable X. Then for any $n\ge 1$, $a>0$ and $b>0$, the following two statements hold:

$E\left[{|{X}_{n}|}^{a}I\left(|{X}_{n}|\le b\right)\right]\le {C}_{1}\left\{E\left[{X}^{a}I\left(X\le b\right)\right]+{b}^{a}P\left(X>b\right)\right\}$

and

$E\left[{|{X}_{n}|}^{a}I\left(|{X}_{n}|>b\right)\right]\le {C}_{2}E\left[{X}^{a}I\left(X>b\right)\right],$

where ${C}_{1}$ and ${C}_{2}$ are positive constants.

## 2 Main results

Theorem 2.1 Let $\alpha >1/2$, $1<p<2$, $1\le \alpha p<2$, and let $\left\{{X}_{n},{\mathcal{F}}_{n},n\ge 1\right\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable X with $E{X}^{p}<\mathrm{\infty }$. Assume that $\left\{{A}_{n},n\ge 1\right\}$ is a random sequence independent of $\left\{{X}_{n},n\ge 1\right\}$. If

$\sum _{i=1}^{n}E{A}_{i}^{2}=O\left(n\right),$
(2.1)

then for every $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E{\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }\right)}^{+}<\mathrm{\infty }$
(2.2)

and for $\alpha p>1$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}E{\left(\underset{k\ge n}{sup}|\frac{{\sum }_{i=1}^{k}{A}_{i}{X}_{i}}{{k}^{\alpha }}|-\epsilon \right)}^{+}<\mathrm{\infty }.$
(2.3)

Theorem 2.2 Let $\alpha >1/2$, $p\ge 2$ and $\left\{{X}_{n},{\mathcal{F}}_{n},n\ge 1\right\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable X with $E{X}^{p}<\mathrm{\infty }$. Let $\left\{{A}_{n},n\ge 1\right\}$ be a random sequence, which is independent of $\left\{{X}_{n},n\ge 1\right\}$. Denote ${\mathcal{G}}_{0}=\left\{\mathrm{\varnothing },\mathrm{\Omega }\right\}$ and ${\mathcal{G}}_{n}=\sigma \left({X}_{1},\dots ,{X}_{n}\right)$, $n\ge 1$. For some $q>\frac{2\left(\alpha p-1\right)}{2\alpha -1}$, we assume that $E{\left[{sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{G}}_{n-1}\right)\right]}^{q/2}<\mathrm{\infty }$ and

$\sum _{i=1}^{n}E{|{A}_{i}|}^{q}=O\left(n\right).$
(2.4)

Then for every $\epsilon >0$, (2.2) and (2.3) hold.

Meanwhile, for the case $p=1$, we have the following theorem.

Theorem 2.3 Let $\alpha >0$ and $\left\{{X}_{n},{\mathcal{F}}_{n},n\ge 1\right\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable X with $E\left[Xln\left(1+X\right)\right]<\mathrm{\infty }$. Assume that (2.1) holds and $\left\{{A}_{n},n\ge 1\right\}$ is a random sequence, which is independent of $\left\{{X}_{n},n\ge 1\right\}$. Then for every $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{-2}E{\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }\right)}^{+}<\mathrm{\infty }$
(2.5)

and for $\alpha >1$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -2}E{\left(\underset{k\ge n}{sup}|\frac{{\sum }_{i=1}^{k}{A}_{i}{X}_{i}}{{k}^{\alpha }}|-\epsilon \right)}^{+}<\mathrm{\infty }.$
(2.6)

In particular, for $\alpha >0$, we have

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -2}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>\epsilon {n}^{\alpha }\right)<\mathrm{\infty },$
(2.7)

and for $\alpha >1$, we have

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -2}P\left(\underset{k\ge n}{sup}|\frac{{\sum }_{i=1}^{k}{A}_{i}{X}_{i}}{{k}^{\alpha }}|>\epsilon \right)<\mathrm{\infty }.$
(2.8)

On the other hand, for $\alpha \ge 1$ and $EX<\mathrm{\infty }$, we have the following theorem.

Theorem 2.4 Let $\alpha \ge 1$ and let $\left\{{X}_{n},{\mathcal{F}}_{n},n\ge 1\right\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable X with $EX<\mathrm{\infty }$. Denote ${\mathcal{G}}_{0}=\left\{\mathrm{\varnothing },\mathrm{\Omega }\right\}$ and ${\mathcal{G}}_{n}=\sigma \left({X}_{1},\dots ,{X}_{n}\right)$, $n\ge 1$. Let (2.1) hold, and let $\left\{{A}_{n},n\ge 1\right\}$ be a random sequence independent of $\left\{{X}_{n},n\ge 1\right\}$. We assume that (i) in the case $\alpha =1$, there exists a $\delta >0$ such that

$\underset{n\to \mathrm{\infty }}{lim}\frac{{max}_{1\le i\le n}E\left[{|{X}_{i}|}^{1+\delta }|{\mathcal{G}}_{i-1}\right]}{{n}^{\delta }}=0,\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$

and (ii) in the case $\alpha >1$, for any $\lambda >0$,

$\underset{n\to \mathrm{\infty }}{lim}\frac{{max}_{1\le i\le n}E\left[|{X}_{i}||{\mathcal{G}}_{i-1}\right]}{{n}^{\lambda }}=0,\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$

Then for $\alpha \ge 1$ and every $\epsilon >0$, (2.7) holds. In addition, for $\alpha >1$, (2.8) holds.

Remark 2.1 If the conditions of Theorem 2.1 or Theorem 2.2 hold, then for every $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>\epsilon {n}^{\alpha }\right)<\mathrm{\infty },$
(2.9)

and for $\alpha p>1$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left(\underset{k\ge n}{sup}|\frac{{\sum }_{i=1}^{k}{A}_{i}{X}_{i}}{{k}^{\alpha }}|>\epsilon \right)<\mathrm{\infty }.$
(2.10)

In fact, it can be checked that for every $\epsilon >0$,

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E{\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }\right)}^{+}\\ \phantom{\rule{1em}{0ex}}=\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }{\int }_{0}^{\mathrm{\infty }}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }>t\right)\phantom{\rule{0.2em}{0ex}}dt\\ \phantom{\rule{1em}{0ex}}\ge \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }{\int }_{0}^{\epsilon {n}^{\alpha }}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }>t\right)\phantom{\rule{0.2em}{0ex}}dt\\ \phantom{\rule{1em}{0ex}}\ge \epsilon \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>2\epsilon {n}^{\alpha }\right).\end{array}$
(2.11)

So (2.2) implies (2.9).

On the other hand, by the proof of Theorem 12.1 of Gut [11] and the proof of (3.2) in Yang et al. [10], it is easy to see that for $\alpha p>1$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left(\underset{k\ge n}{sup}|\frac{{\sum }_{i=1}^{k}{A}_{i}{X}_{i}}{{k}^{\alpha }}|>{2}^{2\alpha }\epsilon \right)\le {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>\epsilon {n}^{\alpha }\right).$

Thus (2.10) follows from (2.9).

Remark 2.2 In Theorem 2.1, if $\alpha =1/p$, then for every $\epsilon >0$, we get by (2.9) that

$\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>\epsilon {n}^{1/p}\right)<\mathrm{\infty }.$
(2.12)

By using (2.12), one can easily obtain the Marcinkiewicz-Zygmund-type strong law of large numbers for randomly weighted sums of martingale differences as follows:

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{n}^{1/p}}\sum _{i=1}^{n}{A}_{i}{X}_{i}=0,\phantom{\rule{1em}{0ex}}\text{a.s.}$

If ${A}_{n}={a}_{n}$ is non-random for each $n\ge 1$ (the constant-weight case), then Theorems 2.1-2.4 yield the corresponding results for non-randomly weighted sums of martingale differences.
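As a numerical illustration of the strong law above (an arbitrary example chosen for this sketch, not a construction from the paper): take ${X}_{i}$ i.i.d. N(0,1), which form a martingale difference sequence with respect to their natural filtration; weights ${A}_{i}$ i.i.d. Uniform(0,1) independent of the ${X}_{i}$, so that ${\sum }_{i=1}^{n}E{A}_{i}^{2}=n/3=O\left(n\right)$ and (2.1) holds; and $p=3/2$, so $\alpha =1/p=2/3$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 1.5  # Theorem 2.1 requires 1 < p < 2; here alpha = 1/p

def scaled_weighted_sum(n):
    """Simulate n^{-1/p} * sum_{i<=n} A_i X_i for one path."""
    x = rng.standard_normal(n)         # martingale differences
    a = rng.uniform(0.0, 1.0, size=n)  # random weights, independent of x
    return float((a * x).sum() / n ** (1.0 / p))

vals = {n: scaled_weighted_sum(n) for n in (10**2, 10**3, 10**4, 10**5)}
print(vals)  # typical magnitude shrinks toward 0 as n grows
```

Since ${\sum }_{i=1}^{n}{A}_{i}{X}_{i}$ has standard deviation of order ${n}^{1/2}$ here, the scaled sum is of order ${n}^{1/2-1/p}$, which tends to 0 precisely because $p<2$.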

Meanwhile, our condition $E{\left[{sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{G}}_{n-1}\right)\right]}^{q/2}<\mathrm{\infty }$ in Theorem 2.2 is weaker than the condition ${sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{F}}_{n-1}\right)\le C$, a.s., in Theorem 1.4, Theorem 1.5 and Theorem 1.7 of Wang et al. [9]. In fact, it follows from ${\mathcal{G}}_{n-1}\subseteq {\mathcal{F}}_{n-1}$ that

$E\left({X}_{n}^{2}|{\mathcal{G}}_{n-1}\right)=E\left[E\left({X}_{n}^{2}|{\mathcal{F}}_{n-1}\right)|{\mathcal{G}}_{n-1}\right]\le E\left[\underset{n\ge 1}{sup}E\left({X}_{n}^{2}|{\mathcal{F}}_{n-1}\right)|{\mathcal{G}}_{n-1}\right].$

If ${sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{F}}_{n-1}\right)\le C$, a.s., then $E{\left[{sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{G}}_{n-1}\right)\right]}^{q/2}<\mathrm{\infty }$. For $\alpha \ge 1$ and $E\left[Xln\left(1+X\right)\right]<\mathrm{\infty }$, Wang et al. [9] obtained the result (2.7) (see Theorem 1.6 of Wang et al. [9]). Therefore, Theorems 2.1-2.4 of this paper generalize Theorems 1.4-1.7 of Wang et al. [9] from nonweighted sums of martingale differences to randomly weighted sums of martingale differences.

On the other hand, if the hypothesis that $\left\{{A}_{n},n\ge 1\right\}$ is independent of $\left\{{X}_{n},n\ge 1\right\}$ is replaced by the assumption that ${A}_{n}$ is ${\mathcal{F}}_{n-1}$-measurable and independent of ${X}_{n}$ for each $n\ge 1$, while the other conditions of Theorem 2.1 hold, then (2.2) and (2.3) still follow (the proof is similar to that of Theorem 2.1). Likewise, if ${A}_{n}$ is ${\mathcal{F}}_{n-1}$-measurable and independent of ${X}_{n}$ for each $n\ge 1$, $E{\left[{sup}_{n\ge 1}E\left({X}_{n}^{2}|{\mathcal{F}}_{n-1}\right)\right]}^{q/2}<\mathrm{\infty }$, and the other conditions of Theorem 2.2 hold, then (2.2) and (2.3) also follow. Similar results can be obtained if we only require that ${A}_{n}$ be ${\mathcal{F}}_{n-1}$-measurable for all $n\ge 1$ (without any independence hypothesis); this case would have many interesting applications (see Huang and Guo, Thanh et al. and the references therein).

## 3 The proofs of main results

Proof of Theorem 2.1 Let ${\mathcal{G}}_{0}=\left\{\mathrm{\varnothing },\mathrm{\Omega }\right\}$, for $n\ge 1$, ${\mathcal{G}}_{n}=\sigma \left({X}_{1},\dots ,{X}_{n}\right)$ and

${X}_{ni}={X}_{i}I\left(|{X}_{i}|\le {n}^{\alpha }\right),\phantom{\rule{1em}{0ex}}1\le i\le n.$

It can be seen that

${A}_{i}{X}_{i}={A}_{i}{X}_{i}I\left(|{X}_{i}|>{n}^{\alpha }\right)+\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]+E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right),\phantom{\rule{1em}{0ex}}1\le i\le n.$
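This decomposition is a pointwise algebraic identity: the conditional mean of the truncated summand is subtracted and added back, so ${A}_{i}{X}_{i}$ is unchanged whatever that conditional mean equals. A minimal numeric check in Python (the array `cond_mean` below is a hypothetical stand-in for $E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)$; any values cancel in the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
trunc = 1.0                          # plays the role of the truncation level n^alpha
a = rng.standard_normal(5)           # weights A_i
x = rng.standard_normal(5)           # summands X_i
x_trunc = x * (np.abs(x) <= trunc)   # truncated summands X_{ni}
cond_mean = rng.standard_normal(5)   # stand-in for E(A_i X_{ni} | G_{i-1})

lhs = a * x
rhs = a * x * (np.abs(x) > trunc) + (a * x_trunc - cond_mean) + cond_mean
assert np.allclose(lhs, rhs)  # the three terms recombine to A_i X_i
```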

So, by Lemma 1.2 with $a={n}^{\alpha }$, for $q>1$, one has that

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E{\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }\right)}^{+}\\ \phantom{\rule{1em}{0ex}}\le {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{q}\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|>{n}^{\alpha }\right)+E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]|\right)\\ \phantom{\rule{1em}{0ex}}\le {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{q}\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}I\left(|{X}_{i}|>{n}^{\alpha }\right)|\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)|\right)\\ \phantom{\rule{1em}{0ex}}:={H}_{1}+{H}_{2}+{H}_{3}.\end{array}$
(3.1)

Obviously, it follows from Hölder’s inequality and (2.1) that

$\sum _{i=1}^{n}E|{A}_{i}|\le {\left(\sum _{i=1}^{n}E{A}_{i}^{2}\right)}^{1/2}{\left(\sum _{i=1}^{n}1\right)}^{1/2}=O\left(n\right).$
(3.2)
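Inequality (3.2) is the Cauchy-Schwarz inequality applied to the numbers $E|{A}_{i}|$, combined with $E{A}_{i}^{2}\ge {\left(E|{A}_{i}|\right)}^{2}$ (Jensen). A quick numeric check at the Jensen lower bound, i.e., the worst case, using hypothetical values for $E|{A}_{i}|$ (any nonnegative numbers would do):

```python
import numpy as np

e_abs_a = np.array([0.5, 1.2, 0.3, 2.0, 0.9])  # hypothetical values of E|A_i|
e_a_sq = e_abs_a ** 2                          # E A_i^2 at its Jensen lower bound
n = len(e_abs_a)

lhs = e_abs_a.sum()                            # sum_i E|A_i|
rhs = np.sqrt(e_a_sq.sum() * n)                # (sum_i E A_i^2)^{1/2} * n^{1/2}
assert lhs <= rhs  # Cauchy-Schwarz, as in (3.2)
```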

By the fact that $\left\{{A}_{n},n\ge 1\right\}$ is independent of $\left\{{X}_{n},n\ge 1\right\}$, we can check by Markov’s inequality, Lemma 1.3, (3.2) and $E{X}^{p}<\mathrm{\infty }$ ($p>1$) that

$\begin{array}{rcl}{H}_{2}& \le & \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }\sum _{i=1}^{n}E|{A}_{i}|E\left[|{X}_{i}|I\left(|{X}_{i}|>{n}^{\alpha }\right)\right]\\ & \le & {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }E\left[XI\left(X>{n}^{\alpha }\right)\right]\\ & =& {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }\sum _{m=n}^{\mathrm{\infty }}E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\\ & =& {C}_{1}\sum _{m=1}^{\mathrm{\infty }}E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\sum _{n=1}^{m}{n}^{\alpha p-1-\alpha }\\ & \le & {C}_{2}\sum _{m=1}^{\mathrm{\infty }}{m}^{\alpha p-\alpha }E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\\ & \le & {C}_{2}\sum _{m=1}^{\mathrm{\infty }}E\left[{X}^{p}I\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\le {C}_{2}E{X}^{p}<\mathrm{\infty }.\end{array}$
(3.3)

On the other hand, one can see that $\left\{{X}_{n},{\mathcal{G}}_{n},n\ge 1\right\}$ is also a martingale difference, since $\left\{{X}_{n},{\mathcal{F}}_{n},n\ge 1\right\}$ is a martingale difference. Combining with the fact that $\left\{{A}_{n},n\ge 1\right\}$ is independent of $\left\{{X}_{n},n\ge 1\right\}$, we have that

$\begin{array}{rcl}E\left({A}_{n}{X}_{n}|{\mathcal{G}}_{n-1}\right)& =& E\left[E\left({A}_{n}{X}_{n}|{\mathcal{G}}_{n}\right)|{\mathcal{G}}_{n-1}\right]=E\left[{X}_{n}E\left({A}_{n}|{\mathcal{G}}_{n}\right)|{\mathcal{G}}_{n-1}\right]\\ =& E{A}_{n}E\left[{X}_{n}|{\mathcal{G}}_{n-1}\right]=0,\phantom{\rule{1em}{0ex}}\text{a.s.},\phantom{\rule{0.25em}{0ex}}n\ge 1.\end{array}$

Consequently, by the proof of (3.3), it follows that

$\begin{array}{rcl}{H}_{3}& =& \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]|\right)\\ =& \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|>{n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]|\right)\\ \le & \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }\sum _{i=1}^{n}E|{A}_{i}|E\left[|{X}_{i}|I\left(|{X}_{i}|>{n}^{\alpha }\right)\right]\\ \le & {C}_{4}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }E\left[XI\left(X>{n}^{\alpha }\right)\right]\le {C}_{5}E{X}^{p}<\mathrm{\infty }.\end{array}$
(3.4)

Next, we turn to prove ${H}_{1}<\mathrm{\infty }$. It can be found that for fixed real numbers ${a}_{1},\dots ,{a}_{n}$,

$\left\{{a}_{i}{X}_{ni}-E\left({a}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right),{\mathcal{G}}_{i},1\le i\le n\right\}$

is also a martingale difference. Note that $\left\{{A}_{1},{A}_{2},\dots ,{A}_{n}\right\}$ is independent of $\left\{{X}_{n1},{X}_{n2},\dots ,{X}_{nn}\right\}$. So, by Markov’s inequality, (2.1), (3.1) with $q=2$, Lemma 1.1 with $p=2$ and Lemma 1.3, we get that

$\begin{array}{rcl}{H}_{1}& =& {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-2\alpha }E\left\{E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{a}_{i}{X}_{ni}-E\left({a}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{2}|{A}_{1}={a}_{1},\dots ,{A}_{n}={a}_{n}\right)\right\}\\ & \le & {C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-2\alpha }E\left\{E\left(\sum _{i=1}^{n}{\left({a}_{i}{X}_{ni}\right)}^{2}|{A}_{1}={a}_{1},\dots ,{A}_{n}={a}_{n}\right)\right\}\\ & =& {C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-2\alpha }\sum _{i=1}^{n}E{\left({A}_{i}{X}_{ni}\right)}^{2}={C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-2\alpha }\sum _{i=1}^{n}E{A}_{i}^{2}E{X}_{ni}^{2}\\ & \le & {C}_{3}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-2\alpha }E\left[{X}^{2}I\left(X\le {n}^{\alpha }\right)\right]+{C}_{4}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1}P\left(X>{n}^{\alpha }\right)\\ & =:& {C}_{3}{H}_{11}+{C}_{4}{H}_{12}.\end{array}$
(3.5)

By the condition $E{X}^{p}<\mathrm{\infty }$ with $p<2$, it follows

$\begin{array}{rcl}{H}_{11}& =& \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-2\alpha }\sum _{i=1}^{n}E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\\ & =& \sum _{i=1}^{\mathrm{\infty }}E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\sum _{n=i}^{\mathrm{\infty }}{n}^{\alpha p-1-2\alpha }\\ & \le & {C}_{5}\sum _{i=1}^{\mathrm{\infty }}{i}^{\alpha p-2\alpha }E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\\ & \le & {C}_{5}\sum _{i=1}^{\mathrm{\infty }}E\left[{X}^{p}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\le {C}_{5}E{X}^{p}<\mathrm{\infty }.\end{array}$
(3.6)

By Markov's inequality and the proof of (3.3), we have

${H}_{12}\le \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }E\left[XI\left(X>{n}^{\alpha }\right)\right]\le CE{X}^{p}<\mathrm{\infty }.$
(3.7)

Consequently, by (3.1) and (3.3)-(3.7), we obtain (2.2) immediately.

For $\alpha p>1$, we turn to the proof of (2.3). Denote ${S}_{k}={\sum }_{i=1}^{k}{A}_{i}{X}_{i}$, $k\ge 1$. Note that $\alpha p<2<2+\alpha$. So, similar to the proof of (3.4) in Yang et al. [10], we can check that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2}E{\left(\underset{k\ge n}{sup}|\frac{{S}_{k}}{{k}^{\alpha }}|-\epsilon {2}^{2\alpha }\right)}^{+}& \le & {C}_{1}\sum _{l=1}^{\mathrm{\infty }}{2}^{l\left(\alpha p-1-\alpha \right)}{\int }_{0}^{\mathrm{\infty }}P\left(\underset{1\le k\le {2}^{l}}{max}|{S}_{k}|>\epsilon {2}^{\alpha \left(l+1\right)}+s\right)\phantom{\rule{0.2em}{0ex}}ds\\ \le & {C}_{1}{2}^{2+\alpha -\alpha p}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-\alpha }E{\left(\underset{1\le k\le n}{max}|{S}_{k}|-\epsilon {n}^{\alpha }\right)}^{+}.\end{array}$

Combining with (2.2), we get (2.3) finally. □

Proof of Theorem 2.2 To prove Theorem 2.2, we use the same notation as that in the proof of Theorem 2.1. For $p\ge 2$, it is easy to see that $q>2\left(\alpha p-1\right)/\left(2\alpha -1\right)\ge 2$. Consequently, for any $1\le s\le 2$, by Hölder’s inequality and (2.4), we get

$\sum _{i=1}^{n}E{|{A}_{i}|}^{s}\le {\left(\sum _{i=1}^{n}E{|{A}_{i}|}^{q}\right)}^{s/q}{\left(\sum _{i=1}^{n}1\right)}^{1-s/q}=O\left(n\right).$
(3.8)

By (3.1), (3.3) and (3.4), one can find that ${H}_{2}<\mathrm{\infty }$ and ${H}_{3}<\mathrm{\infty }$. So we need to prove that ${H}_{1}<\mathrm{\infty }$ under the conditions of Theorem 2.2. For $p\ge 2$, noting that $\left\{{A}_{1},{A}_{2},\dots ,{A}_{n}\right\}$ is independent of $\left\{{X}_{n1},{X}_{n2},\dots ,{X}_{nn}\right\}$, similar to the proof of (3.5), one has by Lemma 1.1 that

$\begin{array}{rcl}{H}_{1}& =& {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{q}\right)\\ \le & {C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }E{\left(\sum _{i=1}^{n}E\left\{{\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]}^{2}|{\mathcal{G}}_{i-1}\right\}\right)}^{q/2}\\ +{C}_{3}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }\sum _{i=1}^{n}E{|{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)|}^{q}\\ =:& {C}_{2}{H}_{11}+{C}_{3}{H}_{12}.\end{array}$
(3.9)

Obviously, for $1\le i\le n$, we have

$\begin{array}{r}E\left\{{\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]}^{2}|{\mathcal{G}}_{i-1}\right\}\\ \phantom{\rule{1em}{0ex}}=E\left[{A}_{i}^{2}{X}_{i}^{2}I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]-{\left[E\left({A}_{i}{X}_{i}I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right)\right]}^{2}\\ \phantom{\rule{1em}{0ex}}\le E\left[{A}_{i}^{2}{X}_{i}^{2}I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]\le E{A}_{i}^{2}E\left({X}_{i}^{2}|{\mathcal{G}}_{i-1}\right),\phantom{\rule{1em}{0ex}}\text{a.s.}\end{array}$

Combining (3.8) with $E{\left[{sup}_{i\ge 1}E\left({X}_{i}^{2}|{\mathcal{G}}_{i-1}\right)\right]}^{q/2}<\mathrm{\infty }$, we obtain that

$\begin{array}{rcl}{H}_{11}& \le & \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }{\left(\sum _{i=1}^{n}E{A}_{i}^{2}\right)}^{q/2}E{\left(\underset{i\ge 1}{sup}E\left({X}_{i}^{2}|{\mathcal{G}}_{i-1}\right)\right)}^{q/2}\\ \le & {C}_{4}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha +q/2}<\mathrm{\infty },\end{array}$
(3.10)

where the last series converges since $q>2\left(\alpha p-1\right)/\left(2\alpha -1\right)$ is equivalent to $\alpha p-2-q\alpha +q/2<-1$. Meanwhile, by the ${C}_{r}$ inequality, Lemma 1.3 and (2.4),

$\begin{array}{rcl}{H}_{12}& \le & {C}_{5}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-2-q\alpha }\sum _{i=1}^{n}E{|{A}_{i}|}^{q}E\left[{|{X}_{i}|}^{q}I\left(|{X}_{i}|\le {n}^{\alpha }\right)\right]\\ \le & {C}_{6}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-q\alpha }E\left[{X}^{q}I\left(X\le {n}^{\alpha }\right)\right]+{C}_{7}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1}P\left(X>{n}^{\alpha }\right)\\ \le & {C}_{6}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-q\alpha }E\left[{X}^{q}I\left(X\le {n}^{\alpha }\right)\right]+{C}_{7}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }E\left[XI\left(X>{n}^{\alpha }\right)\right]\\ =:& {C}_{6}{H}_{11}^{\ast }+{C}_{7}{H}_{12}^{\ast }.\end{array}$
(3.11)

By the condition $p\ge 2$ and $\alpha >1/2$, we have that $2\left(\alpha p-1\right)/\left(2\alpha -1\right)-p\ge 0$, which implies that $q>p$. So, one gets by $E{X}^{p}<\mathrm{\infty }$ that

$\begin{array}{rcl}{H}_{11}^{\ast }& =& \sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-q\alpha }\sum _{i=1}^{n}E\left[{X}^{q}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\\ & =& \sum _{i=1}^{\mathrm{\infty }}E\left[{X}^{q}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\sum _{n=i}^{\mathrm{\infty }}{n}^{\alpha p-1-q\alpha }\\ & \le & {C}_{8}\sum _{i=1}^{\mathrm{\infty }}{i}^{\alpha p-q\alpha }E\left[{X}^{q}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\\ & \le & {C}_{8}\sum _{i=1}^{\mathrm{\infty }}E\left[{X}^{p}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\le {C}_{8}E{X}^{p}<\mathrm{\infty }.\end{array}$
(3.12)

By the proof of (3.3), it follows that

${H}_{12}^{\ast }=\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha p-1-\alpha }E\left[XI\left(X>{n}^{\alpha }\right)\right]\le {C}_{9}E{X}^{p}<\mathrm{\infty }.$
(3.13)

Therefore, by (3.9)-(3.13), we have ${H}_{1}<\mathrm{\infty }$, which completes the proof of (2.2).

Finally, since $\alpha p>1$, similar to the proof of (3.4) in Yang et al. [10], it is easy to see that (2.3) holds both when $\alpha p<2+\alpha$ and when $\alpha p\ge 2+\alpha$. □

Proof of Theorem 2.3 Similar to the proof of Theorem 2.1, by Lemma 1.2, it can be checked that

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{-2}E{\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|-\epsilon {n}^{\alpha }\right)}^{+}\\ \phantom{\rule{1em}{0ex}}\le {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{2}\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{n=1}^{\mathrm{\infty }}{n}^{-2}E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}I\left(|{X}_{i}|>{n}^{\alpha }\right)|\right)\\ \phantom{\rule{2em}{0ex}}+\sum _{n=1}^{\mathrm{\infty }}{n}^{-2}E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)|\right)\\ \phantom{\rule{1em}{0ex}}:={J}_{1}+{J}_{2}+{J}_{3}.\end{array}$
(3.14)

Similarly to the proof of (3.3), we have

$\begin{array}{rcl}{J}_{2}& \le & {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}E\left[XI\left(X>{n}^{\alpha }\right)\right]\\ & =& {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}\sum _{m=n}^{\mathrm{\infty }}E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\\ & =& {C}_{1}\sum _{m=1}^{\mathrm{\infty }}E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\sum _{n=1}^{m}{n}^{-1}\\ & \le & {C}_{2}\sum _{m=1}^{\mathrm{\infty }}ln\left(1+m\right)E\left[XI\left({m}^{\alpha }<X\le {\left(m+1\right)}^{\alpha }\right)\right]\\ & \le & {C}_{3}E\left[Xln\left(1+X\right)\right]<\mathrm{\infty }.\end{array}$
(3.15)

Meanwhile, by the proofs of (3.4) and (3.15), we get

${J}_{3}\le {C}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}E\left[XI\left(X>{n}^{\alpha }\right)\right]\le {C}_{2}E\left[Xln\left(1+X\right)\right]<\mathrm{\infty }.$
(3.16)

On the other hand, by the proof of (3.5), it can be checked that for $\alpha >0$,

$\begin{array}{rcl}{J}_{1}& \le & {C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{-2-\alpha }\sum _{i=1}^{n}E{\left({A}_{i}{X}_{ni}\right)}^{2}={C}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{-2-\alpha }\sum _{i=1}^{n}E{A}_{i}^{2}E{X}_{ni}^{2}\\ & \le & {C}_{3}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1-\alpha }E\left[{X}^{2}I\left(X\le {n}^{\alpha }\right)\right]+{C}_{4}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -1}P\left(X>{n}^{\alpha }\right)\\ & \le & {C}_{3}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1-\alpha }\sum _{i=1}^{n}E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]+{C}_{4}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1}E\left[XI\left(X>{n}^{\alpha }\right)\right]\\ & \le & {C}_{3}\sum _{i=1}^{\mathrm{\infty }}E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]\sum _{n=i}^{\mathrm{\infty }}{n}^{-1-\alpha }+{C}_{5}E\left[Xln\left(1+X\right)\right]\\ & \le & {C}_{6}\sum _{i=1}^{\mathrm{\infty }}{i}^{-\alpha }E\left[{X}^{2}I\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]+{C}_{5}E\left[Xln\left(1+X\right)\right]\\ & \le & {C}_{6}\sum _{i=1}^{\mathrm{\infty }}E\left[XI\left({\left(i-1\right)}^{\alpha }<X\le {i}^{\alpha }\right)\right]+{C}_{5}E\left[Xln\left(1+X\right)\right]\\ & \le & {C}_{6}EX+{C}_{5}E\left[Xln\left(1+X\right)\right]<\mathrm{\infty }.\end{array}$
(3.17)

Therefore, by (3.14)-(3.17), one gets (2.5) immediately. Similar to the proof of (2.3), it is easy to obtain (2.6). Obviously, by the proof of (2.11) in Remark 2.1, (2.7) also holds under the conditions of Theorem 2.3. Finally, by the proof of Theorem 12.1 of Gut [11] and the proof of (3.2) in Yang et al. [10], it is easy to get (2.8) for $\alpha >1$. □

Proof of Theorem 2.4 For $n\ge 1$, we also denote ${X}_{ni}={X}_{i}I\left(|{X}_{i}|\le {n}^{\alpha }\right)$, $1\le i\le n$. It is easy to see that

$P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{i}|>\epsilon {n}^{\alpha }\right)\le \sum _{i=1}^{n}P\left(|{X}_{i}|>{n}^{\alpha }\right)+P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}{A}_{i}{X}_{ni}|>\epsilon {n}^{\alpha }\right).$
(3.18)

For the case of $\alpha =1$, there exists a $\delta >0$ such that ${lim}_{n\to \mathrm{\infty }}\frac{{max}_{1\le i\le n}E\left[{|{X}_{i}|}^{1+\delta }|{\mathcal{G}}_{i-1}\right]}{{n}^{\delta }}=0$, a.s. So by $E\left({A}_{n}{X}_{n}|{\mathcal{G}}_{n-1}\right)=0$, a.s., $n\ge 1$, we can check that

$\begin{array}{rcl}\frac{1}{{n}^{\alpha }}\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)|\right)& =& \frac{1}{n}\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|\le n\right)|{\mathcal{G}}_{i-1}\right]|\right)\\ =& \frac{1}{n}\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|>n\right)|{\mathcal{G}}_{i-1}\right]|\right)\\ \le & \frac{1}{n}\sum _{i=1}^{n}E|{A}_{i}|E\left[|{X}_{i}|I\left(|{X}_{i}|>n\right)|{\mathcal{G}}_{i-1}\right]\\ \le & \frac{1}{{n}^{1+\delta }}\sum _{i=1}^{n}E|{A}_{i}|E\left[{|{X}_{i}|}^{1+\delta }|{\mathcal{G}}_{i-1}\right]\\ \le & \frac{K}{{n}^{\delta }}\underset{1\le i\le n}{max}E\left[{|{X}_{i}|}^{1+\delta }|{\mathcal{G}}_{i-1}\right]\to 0,\phantom{\rule{1em}{0ex}}\text{a.s.},\end{array}$

as $n\to \mathrm{\infty }$.

Otherwise, in the case $\alpha >1$, we have assumed that ${lim}_{n\to \mathrm{\infty }}\frac{{max}_{1\le i\le n}E\left[|{X}_{i}||{\mathcal{G}}_{i-1}\right]}{{n}^{\lambda }}=0$, a.s., for any $\lambda >0$. Consequently, for $\alpha >1$, it follows that

$\begin{array}{rcl}\frac{1}{{n}^{\alpha }}\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)|\right)& =& \frac{1}{{n}^{\alpha }}\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}E\left[{A}_{i}{X}_{i}I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]|\right)\\ & \le & \frac{1}{{n}^{\alpha }}\sum _{i=1}^{n}E|{A}_{i}|E\left[|{X}_{i}|I\left(|{X}_{i}|\le {n}^{\alpha }\right)|{\mathcal{G}}_{i-1}\right]\\ & \le & \frac{{K}_{1}}{{n}^{\alpha -1}}\underset{1\le i\le n}{max}E\left[|{X}_{i}||{\mathcal{G}}_{i-1}\right]\to 0,\phantom{\rule{1em}{0ex}}\text{a.s.},\end{array}$

as $n\to \mathrm{\infty }$. Meanwhile,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -2}\sum _{i=1}^{n}P\left(|{X}_{i}|>{n}^{\alpha }\right)\le {K}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -1}P\left(X>{n}^{\alpha }\right)\le {K}_{2}EX<\mathrm{\infty }.$
(3.19)
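The comparison ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{\alpha -1}P\left(X>{n}^{\alpha }\right)\le K\cdot EX$ used in (3.19) is an integral test: substituting $u={t}^{\alpha }$ turns ${\int }_{1}^{\mathrm{\infty }}{t}^{\alpha -1}P\left(X>{t}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}dt$ into ${\alpha }^{-1}\int P\left(X>u\right)\phantom{\rule{0.2em}{0ex}}du\le {\alpha }^{-1}EX$. An illustrative check with $X\sim \mathrm{Exp}\left(1\right)$, so $EX=1$ and $P\left(X>t\right)={e}^{-t}$, and the arbitrary choice $\alpha =1.5$:

```python
import math

alpha = 1.5
EX = 1.0  # mean of the Exp(1) distribution

# Partial sum of n^{alpha-1} * P(X > n^alpha) with P(X > t) = exp(-t);
# the summand underflows to 0 quickly, so 10_000 terms are plenty.
series = sum(n ** (alpha - 1) * math.exp(-n ** alpha) for n in range(1, 10_000))
print(series)  # ~ 0.46, consistent with a bound K * EX
```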

By (3.18) and (3.19), to prove (2.7), it suffices to show that

${I}_{3}=\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -2}P\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]|>\frac{\epsilon {n}^{\alpha }}{2}\right)<\mathrm{\infty }.$

Obviously, by Markov’s inequality and the proofs of (3.5), (3.6), (3.19), one can check that

$\begin{array}{rcl}{I}_{3}& \le & \frac{4}{{\epsilon }^{2}}\sum _{n=1}^{\mathrm{\infty }}{n}^{-2-\alpha }E\left(\underset{1\le k\le n}{max}|\sum _{i=1}^{k}\left[{A}_{i}{X}_{ni}-E\left({A}_{i}{X}_{ni}|{\mathcal{G}}_{i-1}\right)\right]{|}^{2}\right)\\ \le & {K}_{1}\sum _{n=1}^{\mathrm{\infty }}{n}^{-1-\alpha }E\left[{X}^{2}I\left(X\le {n}^{\alpha }\right)\right]+{K}_{2}\sum _{n=1}^{\mathrm{\infty }}{n}^{\alpha -1}P\left(X>{n}^{\alpha }\right)\\ \le & {K}_{3}EX<\mathrm{\infty }.\end{array}$

On the other hand, by the proof of Theorem 12.1 of Gut  and the proof of (3.2) in Yang et al. , we obtain (2.8) for $\alpha >1$ analogously. □
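As a purely illustrative numerical check (not part of the proof), one can estimate by simulation the tail probabilities whose weighted sum appears in (2.7). The concrete choices below are hypothetical: the martingale differences $X_i$ are taken i.i.d. Rademacher and the random weights $A_i$ i.i.d. uniform on $[0,1]$, independent of the $X_i$; the estimates should decay rapidly in n, consistent with the convergence of the series.

```python
import numpy as np

# Hypothetical concrete model: Rademacher martingale differences X_i
# and U[0,1] random weights A_i, independent of the X_i.
rng = np.random.default_rng(0)
alpha, eps, reps = 1.2, 0.1, 2000


def tail_prob(n):
    """Monte Carlo estimate of P(max_{1<=k<=n} |sum_{i<=k} A_i X_i| > eps * n^alpha)."""
    X = rng.choice([-1.0, 1.0], size=(reps, n))   # martingale differences
    A = rng.uniform(0.0, 1.0, size=(reps, n))     # random weights
    partial_max = np.max(np.abs(np.cumsum(A * X, axis=1)), axis=1)
    return float(np.mean(partial_max > eps * n ** alpha))


probs = [tail_prob(n) for n in (10, 40, 160)]
print(probs)  # estimated tail probabilities, decreasing towards 0 as n grows
```

Since the maximal partial sum grows like $\sqrt{n}$ while the threshold grows like ${n}^{\alpha }$ with $\alpha >1$, the estimated probabilities drop to essentially zero already for moderate n.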

## References

1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33(2):25–31. 10.1073/pnas.33.2.25

2. Erdös P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20(2):286–291. 10.1214/aoms/1177730037

3. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120(1):108–123. 10.1090/S0002-9947-1965-0198524-1

4. Shao QM: A moment inequality and its application. Acta Math. Sin. 1988, 31A(6):736–747.

5. Yu KF: Complete convergence of weighted sums of martingale differences. J. Theor. Probab. 1990, 3(2):339–347. 10.1007/BF01045165

6. Ghosal S, Chandra TK: Complete convergence of martingale arrays. J. Theor. Probab. 1998, 11(3):621–631. 10.1023/A:1022646429754

7. Stoica G: Baum-Katz-Nagaev type results for martingales. J. Math. Anal. Appl. 2007, 336(2):1489–1492. 10.1016/j.jmaa.2007.03.012

8. Stoica G: A note on the rate of convergence in the strong law of large numbers for martingales. J. Math. Anal. Appl. 2011, 381(2):910–913. 10.1016/j.jmaa.2011.04.008

9. Wang XJ, Hu SH, Yang WZ, Wang XH: Convergence rates in the strong law of large numbers for martingale difference sequences. Abstr. Appl. Anal. 2012., 2012: Article ID 572493

10. Yang WZ, Hu SH, Wang XJ: Complete convergence for moving average process of martingale differences. Discrete Dyn. Nat. Soc. 2012., 2012: Article ID 128492

11. Gut A: Probability: A Graduate Course. Springer, New York; 2005.

12. Chen PY, Hu T-C, Volodin A: Limiting behaviour of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 2009, 79(1):105–111. 10.1016/j.spl.2008.07.026

13. Sung SH: Moment inequalities and complete moment convergence. J. Inequal. Appl. 2009., 2009: Article ID 271265

14. Sung SH: Complete convergence for weighted sums of ${\rho }^{\ast }$-mixing random variables. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 630608

15. Sung SH: Convergence of moving average processes for dependent random variables. Commun. Stat., Theory Methods 2011, 40(13):2366–2376. 10.1080/03610921003797761

16. Sung SH: Complete qth moment convergence for arrays of random variables. J. Inequal. Appl. 2013., 2013: Article ID 24

17. Sung SH, Volodin A: A note on the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Stoch. Anal. Appl. 2011, 29(2):282–291. 10.1080/07362994.2011.548670

18. Hu T-C, Rosalsky A, Volodin A: A complete convergence theorem for row sums from arrays of rowwise independent random elements in Rademacher type p Banach spaces. Stoch. Anal. Appl. 2012, 30(2):343–353. 10.1080/07362994.2012.649630

19. Thanh LV, Yin G: Almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces. Taiwan. J. Math. 2011, 15(4):1759–1781.

20. Cabrera MO, Rosalsky A, Volodin A: Some theorems on conditional mean convergence and conditional almost sure convergence for randomly weighted sums of dependent random variables. Test 2012, 21(2):369–385. 10.1007/s11749-011-0248-0

21. Hall P, Heyde CC: Martingale Limit Theory and Its Application. Academic Press, New York; 1980.

22. Huang DW, Guo L: Estimation of nonstationary ARMAX models based on the Hannan-Rissanen method. Ann. Stat. 1990, 18(4):1729–1756. 10.1214/aos/1176347875

23. Thanh LV, Yin G, Wang LY: State observers with random sampling times and convergence analysis of double-indexed and randomly weighted sums of mixing processes. SIAM J. Control Optim. 2011, 49(1):106–124. 10.1137/10078671X

## Acknowledgements

The authors are most grateful to the Editor, Prof. Soo Hak Sung, and two anonymous referees for their careful reading and insightful comments, which helped to significantly improve an earlier version of this paper. This work is supported by the NNSF of China (11171001, 11201001), the Natural Science Foundation of Anhui Province (1208085QA03, 1308085QA03), the Talents Youth Fund of Anhui Province Universities (2012SQRL204) and the Doctoral Research Start-up Funds Projects of Anhui University.

## Author information


### Corresponding author

Correspondence to Shuhe Hu.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.


Yang, W., Wang, Y., Wang, X. et al. Complete moment convergence for randomly weighted sums of martingale differences. J Inequal Appl 2013, 396 (2013). https://doi.org/10.1186/1029-242X-2013-396 