# Strong convergence results for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables

## Abstract

In this paper, the authors study the strong convergence for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables without the assumption of identical distribution. As an application, the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables is obtained.

MSC: 60F15.

## 1 Introduction

Many useful linear statistics based on a random sample are weighted sums of independent and identically distributed random variables. Examples include least-squares estimators, nonparametric regression function estimators and jackknife estimates, among others. In this respect, studies of strong laws for these weighted sums have led to significant progress in probability theory, with applications in mathematical statistics. The main purpose of this paper is to further study the strong laws for such weighted sums of $\stackrel{˜}{\rho }$-mixing random variables.

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables defined on a fixed probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$. Write ${\mathcal{F}}_{S}=\sigma \left({X}_{i},i\in S\subset \mathbb{N}\right)$. Given σ-algebras $\mathcal{B}$, $\mathcal{R}$ in $\mathcal{F}$, let

$\rho \left(\mathcal{B},\mathcal{R}\right)=\underset{X\in {L}_{2}\left(\mathcal{B}\right),Y\in {L}_{2}\left(\mathcal{R}\right)}{sup}\frac{|EXY-EXEY|}{{\left(VarXVarY\right)}^{1/2}}.$
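As a numerical aside (an illustration, not part of the original text): for any fixed pair $X\in {L}_{2}\left(\mathcal{B}\right)$, $Y\in {L}_{2}\left(\mathcal{R}\right)$, the ratio inside the supremum is simply the absolute correlation of X and Y, so the sample correlation of any single pair gives a Monte Carlo lower bound on $\rho \left(\mathcal{B},\mathcal{R}\right)$. A minimal Python sketch (all names and distributions are illustrative choices):

```python
import random
import math

def correlation_ratio(xs, ys):
    """Sample analogue of |E[XY] - E[X]E[Y]| / sqrt(Var X * Var Y),
    i.e. the absolute sample correlation of one fixed pair (X, Y).
    Any such pair lower-bounds the maximal correlation rho(B, R)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return abs(cov) / math.sqrt(vx * vy)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(20000)]
# Y = X + independent noise: true correlation is 1/sqrt(2) ~ 0.707.
ys = [x + random.gauss(0, 1) for x in xs]
r = correlation_ratio(xs, ys)
```

For independent σ-algebras every such pair is uncorrelated, so the estimated ratio is near zero; the dependent pair above yields a value near $1/\sqrt{2}$.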

Define the $\stackrel{˜}{\rho }$-mixing coefficients by

$\stackrel{˜}{\rho }\left(k\right)=sup\left\{\rho \left({\mathcal{F}}_{S},{\mathcal{F}}_{T}\right):S,T\subset \mathbb{N}\text{ finite with }dist\left(S,T\right)\ge k\right\},\phantom{\rule{1em}{0ex}}k\ge 0.$

Obviously, $0\le \stackrel{˜}{\rho }\left(k+1\right)\le \stackrel{˜}{\rho }\left(k\right)\le 1$ and $\stackrel{˜}{\rho }\left(0\right)=1$.

Definition 1.1 A sequence of random variables $\left\{{X}_{n},n\ge 1\right\}$ is said to be $\stackrel{˜}{\rho }$-mixing if there exists $k\in \mathbb{N}$ such that $\stackrel{˜}{\rho }\left(k\right)<1$.

The concept of the coefficient $\stackrel{˜}{\rho }$ was introduced by Moore [1], and Bradley [2] was the first to introduce $\stackrel{˜}{\rho }$-mixing random variables to limit theorems. Since then, many applications have been found. See, for example, Bradley [2] for the central limit theorem, Bryc and Smolenski [3], Peligrad and Gut [4], and Utev and Peligrad [5] for moment inequalities, Gan [6], Kuczmaszewska [7], Wu and Jiang [8] and Wang et al. [9] for almost sure convergence, Peligrad and Gut [4], Gan [6], Cai [10], Kuczmaszewska [11], Zhu [12], An and Yuan [13] and Wang et al. [14] for complete convergence, Peligrad [15] for the invariance principle, Wu and Jiang [16] for strong limit theorems for weighted product sums of $\stackrel{˜}{\rho }$-mixing sequences of random variables, Wu and Jiang [17] for Chover-type laws of the k-iterated logarithm, Wu [18] for strong consistency of the estimator in a linear model, Wang et al. [19] for complete consistency of the estimator of nonparametric regression models, Wu et al. [20] and Guo and Zhu [21] for complete moment convergence, and so forth. Compared with the corresponding results for sequences of independent random variables, however, much remains to be desired, so studying the limit behavior of $\stackrel{˜}{\rho }$-mixing random variables is of interest.

Let $\left\{{X}_{i},i\ge 1\right\}$ be a sequence of independent observations from a population distribution. A common expression for these linear statistics is ${T}_{n}\doteq {\sum }_{i=1}^{n}{a}_{ni}{X}_{i}$, where the weights ${a}_{ni}$ are either real constants or random variables independent of ${X}_{i}$. Using an observation on Bernstein’s inequality by Cheng [22], Bai et al. [23] established an extension of the Hardy-Littlewood strong law for linear statistics ${T}_{n}$. This complements a result of Cuzick [[24], Theorem 2.2]. For more details about the strong law for linear statistics ${T}_{n}$, one can refer to Bai and Cheng [25], Sung [26, 27], Cai [28], Jing and Liang [29], Zhou et al. [30], Wang et al. [31–33] and Wu and Chen [34], and so forth.

Recently, Sung [26] obtained the following strong convergence result for weighted sums of identically distributed negatively associated random variables.

Theorem 1.1 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed negatively associated random variables, and let $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ be an array of constants satisfying

$\sum _{i=1}^{n}|{a}_{ni}{|}^{\alpha }=O\left(n\right)$
(1.1)

for some $0<\alpha \le 2$. Let ${b}_{n}={n}^{1/\alpha }{log}^{1/\gamma }n$ for some $\gamma >0$. Furthermore, suppose that $E{X}_{1}=0$ when $1<\alpha \le 2$. If

$\begin{array}{r}E{|{X}_{1}|}^{\alpha }<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathit{\text{for}}\alpha >\gamma ,\\ E{|{X}_{1}|}^{\alpha }log\left(1+|{X}_{1}|\right)<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathit{\text{for}}\alpha =\gamma ,\\ E{|{X}_{1}|}^{\gamma }<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathit{\text{for}}\alpha <\gamma ,\end{array}$
(1.2)

then

$\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\mathit{\text{for all}}\epsilon >0.$
(1.3)

Zhou et al. [30] partially extended Theorem 1.1 for negatively associated random variables to the case of $\stackrel{˜}{\rho }$-mixing random variables as follows.

Theorem 1.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed $\stackrel{˜}{\rho }$-mixing random variables, and let $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ be an array of constants satisfying

$\sum _{i=1}^{n}|{a}_{ni}{|}^{max\left\{\alpha ,\gamma \right\}}=O\left(n\right)$
(1.4)

for some $0<\alpha \le 2$ and $\gamma >0$ with $\alpha \ne \gamma$. Let ${b}_{n}={n}^{1/\alpha }{log}^{1/\gamma }n$. If $E{X}_{1}=0$ for $1<\alpha \le 2$ and (1.2) holds for $\alpha \ne \gamma$, then (1.3) holds.

Zhou et al. [30] left an open problem whether the case $\alpha =\gamma$ of Theorem 1.1 holds for $\stackrel{˜}{\rho }$-mixing random variables. Sung [27] solved the open problem and obtained the following strong convergence result for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables.

Theorem 1.3 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed $\stackrel{˜}{\rho }$-mixing random variables, and let $\left\{{a}_{ni},1\le i\le n,n\ge 1\right\}$ be an array of constants satisfying (1.1) for some $0<\alpha \le 2$. Let ${b}_{n}={n}^{1/\alpha }{log}^{1/\alpha }n$. If $E{X}_{1}=0$ for $1<\alpha \le 2$ and $E{|{X}_{1}|}^{\alpha }log\left(1+|{X}_{1}|\right)<\mathrm{\infty }$, then (1.3) holds.

Sung [27] also left an open problem whether the case $\alpha <\gamma$ of Theorem 1.1 holds for $\stackrel{˜}{\rho }$-mixing random variables. In this paper, we will partially solve the open problem using a different method from Zhou et al. [30] and Sung [27]. As an application, the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables is obtained. The results presented in this paper are obtained by using the truncated method and the Rosenthal-type inequality of $\stackrel{˜}{\rho }$-mixing random variables (Lemma 2.1 in Section 2).

Throughout the paper, let $I\left(A\right)$ be the indicator function of the set A. C denotes a positive constant which may differ in various places, and ${a}_{n}=O\left({b}_{n}\right)$ stands for ${a}_{n}\le C{b}_{n}$. Denote $log x=ln max\left(x,e\right)$.

## 2 Main results

Firstly, let us recall the definition of stochastic domination which will be used frequently in the paper.

Definition 2.1 A sequence of random variables $\left\{{X}_{n},n\ge 1\right\}$ is said to be stochastically dominated by a random variable X if there exists a positive constant C such that

$P\left(|{X}_{n}|>x\right)\le CP\left(|X|>x\right)$

for all $x\ge 0$ and $n\ge 1$.
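A quick sanity check of Definition 2.1 (an illustrative example, not from the paper): uniform variables on shrinking symmetric intervals are stochastically dominated by a uniform variable on $\left[-1,1\right]$ with $C=1$, since their tail probabilities are pointwise smaller.

```python
def tail_uniform(a, x):
    """P(|U| > x) for U uniform on [-a, a]."""
    if x >= a:
        return 0.0
    return 1.0 - x / a

# Illustrative: X_n uniform on [-a_n, a_n] with a_n = n/(n+1) <= 1 is
# stochastically dominated by X uniform on [-1, 1] with constant C = 1,
# because P(|X_n| > x) <= P(|X| > x) for every x >= 0.
ok = all(
    tail_uniform(n / (n + 1), x) <= 1.0 * tail_uniform(1.0, x)
    for n in range(1, 50)
    for x in [i / 100 for i in range(0, 120)]
)
```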

To prove the main results of the paper, we need the following two lemmas. The first one is the Rosenthal-type inequality for $\stackrel{˜}{\rho }$-mixing random variables. The proof can be found in Utev and Peligrad [5].

Lemma 2.1 (Utev and Peligrad [5])

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables with $E{X}_{n}=0$, $E{|{X}_{n}|}^{p}<\mathrm{\infty }$ for some $p\ge 2$ and each $n\ge 1$. Then there exists a positive constant C depending only on p such that

$E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}{|}^{p}\right)\le C\left\{\sum _{i=1}^{n}E{|{X}_{i}|}^{p}+{\left(\sum _{i=1}^{n}E{X}_{i}^{2}\right)}^{p/2}\right\}.$
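The inequality can be checked numerically in a simple special case. Independent random variables are $\stackrel{˜}{\rho }$-mixing (their mixing coefficients vanish for $k\ge 1$), so the following Monte Carlo sketch (illustrative, Rademacher variables, $p=4$) estimates the left-hand side and compares it with the bracketed quantity on the right; the ratio stays bounded, consistent with a constant C depending only on p:

```python
import random

def rosenthal_check(n, p, trials, seed=1):
    """Monte Carlo sketch (illustration, not a proof): for independent
    Rademacher X_i, a special case of rho~-mixing, estimate
    E max_{1<=j<=n} |S_j|^p and compare it with
    sum E|X_i|^p + (sum E X_i^2)^{p/2} = n + n^{p/2}."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s, m = 0, 0
        for _ in range(n):
            s += rng.choice((-1, 1))   # Rademacher step
            m = max(m, abs(s))         # running maximum of |S_j|
        acc += m ** p
    lhs = acc / trials                 # estimate of E max |S_j|^p
    rhs = n + n ** (p / 2)             # Rosenthal bracket without C
    return lhs, rhs

lhs, rhs = rosenthal_check(n=200, p=4, trials=2000)
ratio = lhs / rhs
```

With these parameters the empirical ratio is a small constant, as Lemma 2.1 predicts.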

The next one is the basic property for stochastic domination. The proof is standard, so we omit it.

Lemma 2.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables which is stochastically dominated by a random variable X. For any $\alpha >0$ and $b>0$, the following two statements hold:

$\begin{array}{c}E{|{X}_{n}|}^{\alpha }I\left(|{X}_{n}|\le b\right)\le {C}_{1}\left[E{|X|}^{\alpha }I\left(|X|\le b\right)+{b}^{\alpha }P\left(|X|>b\right)\right],\hfill \\ E{|{X}_{n}|}^{\alpha }I\left(|{X}_{n}|>b\right)\le {C}_{2}E{|X|}^{\alpha }I\left(|X|>b\right),\hfill \end{array}$

where ${C}_{1}$ and ${C}_{2}$ are positive constants. Consequently, $E{|{X}_{n}|}^{\alpha }\le CE{|X|}^{\alpha }$.

Our main results are as follows.

Theorem 2.1 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X, and let $\left\{{a}_{ni},i\ge 1,n\ge 1\right\}$ be an array of constants. Assume that the following two conditions are satisfied:

(i) There exist some δ with $0<\delta <1$ and some α with $0<\alpha \le 2$ such that ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }=O\left({n}^{\delta }\right)$, and assume further that $E{X}_{n}=0$ when $1<\alpha \le 2$;

(ii) $p\ge 1/\alpha$. For some $\beta >max\left\{p{\alpha }^{2},\alpha +\frac{\alpha \left(p\alpha -1\right)}{1-\delta },\alpha +2,\alpha \left(p\alpha -1\right)+2\delta \right\}$, $E{|X|}^{\beta }<\mathrm{\infty }$.

Then for any $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)<\mathrm{\infty },$
(2.1)

where ${b}_{n}\doteq {n}^{1/\alpha }{log}^{1/\gamma }n$ for some $\gamma >0$.

Similar to the proof of Theorem 2.1 and by weakening the condition (i) of Theorem 2.1 (i.e., ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }=O\left({n}^{\delta }\right)$ is replaced by ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }=O\left(n\right)$), we can get the following strong convergence result for the special case $p\alpha =1$. The proof is omitted.

Theorem 2.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X, and let $\left\{{a}_{ni},i\ge 1,n\ge 1\right\}$ be an array of constants. Assume that there exists some α with $0<\alpha \le 2$ such that ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }=O\left(n\right)$, and assume further that $E{X}_{n}=0$ when $1<\alpha \le 2$. If there exists some $\beta >\alpha +2$ such that $E{|X|}^{\beta }<\mathrm{\infty }$, then for any $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)<\mathrm{\infty },$
(2.2)

where ${b}_{n}\doteq {n}^{1/\alpha }{log}^{1/\gamma }n$ for some $\gamma >0$.

If the array of constants $\left\{{a}_{ni},i\ge 1,n\ge 1\right\}$ is replaced by a sequence of constants $\left\{{a}_{n},n\ge 1\right\}$, then we can get the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums ${S}_{n}\doteq {\sum }_{i=1}^{n}{a}_{i}{X}_{i}$ of $\stackrel{˜}{\rho }$-mixing sequence of random variables as follows.

Theorem 2.3 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of $\stackrel{˜}{\rho }$-mixing random variables which is stochastically dominated by a random variable X, and let $\left\{{a}_{n},n\ge 1\right\}$ be a sequence of constants. Assume that there exists some α with $0<\alpha \le 2$ such that ${\sum }_{i=1}^{n}{|{a}_{i}|}^{\alpha }=O\left(n\right)$, and assume further that $E{X}_{n}=0$ when $1<\alpha \le 2$. If there exists some $\beta >\alpha +2$ such that $E{|X|}^{\beta }<\mathrm{\infty }$, then for any $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|{S}_{j}|>\epsilon {b}_{n}\right)<\mathrm{\infty }$
(2.3)

and

$\underset{n\to \mathrm{\infty }}{lim}\frac{{S}_{n}}{{b}_{n}}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}},$
(2.4)

where ${b}_{n}\doteq {n}^{1/\alpha }{log}^{1/\gamma }n$ for some $\gamma >0$ and ${S}_{n}\doteq {\sum }_{i=1}^{n}{a}_{i}{X}_{i}$ for $n\ge 1$.
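As an illustration of (2.4) (a simulation sketch, not part of the proof): i.i.d. Rademacher variables are $\stackrel{˜}{\rho }$-mixing and satisfy the hypotheses of Theorem 2.3 with ${a}_{i}\equiv 1$; taking $\alpha =\gamma =2$ gives ${b}_{n}={n}^{1/2}{log}^{1/2}n$, and the observed ratio $|{S}_{n}|/{b}_{n}$ is small for large n.

```python
import random
import math

# Simulation sketch (assumed model: i.i.d. Rademacher X_i, which are
# rho~-mixing since disjoint blocks are independent), a_i = 1,
# alpha = gamma = 2, so b_n = n^{1/2} (log n)^{1/2} with the
# convention log x = ln max(x, e).
rng = random.Random(2013)
n_max = 200000
s = 0
checkpoints = {1000: None, 10000: None, 100000: None, n_max: None}
for n in range(1, n_max + 1):
    s += rng.choice((-1, 1))
    if n in checkpoints:
        b_n = math.sqrt(n) * math.sqrt(math.log(max(n, math.e)))
        checkpoints[n] = abs(s) / b_n
final_ratio = checkpoints[n_max]
```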

Remark 2.1 In Theorem 2.2, we not only consider the case $\alpha <\gamma$, but also consider the cases $\alpha >\gamma$ and $\alpha =\gamma$. The only defect is that our moment condition ‘$E{|X|}^{\beta }<\mathrm{\infty }$ for some $\beta >\alpha +2$’ is stronger than the corresponding one of Theorem 1.1. So, our main result partially settles the open problem posed by Sung [27]. In addition, we extend the results of Zhou et al. [30] and Sung [27] for identically distributed $\stackrel{˜}{\rho }$-mixing random variables to the case of non-identical distribution.

## 3 The proofs

Proof of Theorem 2.1 For fixed $n\ge 1$, define

${X}_{i}^{\left(n\right)}={X}_{i}I\left(|{X}_{i}|\le {b}_{n}\right),\phantom{\rule{1em}{0ex}}i\ge 1,\phantom{\rule{2em}{0ex}}{T}_{j}^{\left(n\right)}=\sum _{i=1}^{j}{a}_{ni}\left({X}_{i}^{\left(n\right)}-E{X}_{i}^{\left(n\right)}\right),\phantom{\rule{1em}{0ex}}j=1,2,\dots ,n.$

It is easy to check that for any $\epsilon >0$,

$\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)\subset \left(\underset{1\le i\le n}{max}|{X}_{i}|>{b}_{n}\right)\cup \left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}^{\left(n\right)}|>\epsilon {b}_{n}\right),$

which implies that

$\begin{array}{r}P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)\\ \phantom{\rule{1em}{0ex}}\le P\left(\underset{1\le i\le n}{max}|{X}_{i}|>{b}_{n}\right)+P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}^{\left(n\right)}|>\epsilon {b}_{n}\right)\\ \phantom{\rule{1em}{0ex}}\le \sum _{i=1}^{n}P\left(|{X}_{i}|>{b}_{n}\right)+P\left(\underset{1\le j\le n}{max}|{T}_{j}^{\left(n\right)}|>\epsilon {b}_{n}-\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}E{X}_{i}^{\left(n\right)}|\right).\end{array}$
(3.1)

Firstly, we will show that

$\frac{1}{{b}_{n}}\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}E{X}_{i}^{\left(n\right)}|\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty }.$
(3.2)

By ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }=O\left({n}^{\delta }\right)$ and Hölder’s inequality, we have for $1\le k<\alpha$ that

$\sum _{i=1}^{n}{|{a}_{ni}|}^{k}\le {\left(\sum _{i=1}^{n}{\left({|{a}_{ni}|}^{k}\right)}^{\frac{\alpha }{k}}\right)}^{\frac{k}{\alpha }}{\left(\sum _{i=1}^{n}1\right)}^{\frac{\alpha -k}{\alpha }}\le Cn.$
(3.3)
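Inequality (3.3) can be sanity-checked numerically (an illustrative configuration: constant weights calibrated so that ${\sum }_{i=1}^{n}{|{a}_{ni}|}^{\alpha }={n}^{\delta }$ exactly, with $\alpha =2$, $\delta =0.5$, $k=1$):

```python
# Numeric sketch of the Hölder step (3.3): if sum |a_ni|^alpha = n^delta
# with 0 < delta < 1, then sum |a_ni|^k is bounded by
# (sum |a_ni|^alpha)^{k/alpha} * n^{(alpha-k)/alpha} <= n for 1 <= k < alpha.
alpha, delta, k = 2.0, 0.5, 1.0
ok = True
for n in range(2, 2000):
    a = (n ** delta / n) ** (1 / alpha)  # constant weight: sum a^alpha = n^delta
    lhs = n * a ** k                     # sum |a_ni|^k
    holder = (n ** delta) ** (k / alpha) * n ** ((alpha - k) / alpha)
    ok = ok and lhs <= holder + 1e-9 and holder <= n + 1e-9
```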

Hence, when $1<\alpha \le 2$, we have by $E{X}_{n}=0$, Lemma 2.2, (3.3) (taking $k=1$), Markov’s inequality and condition (ii) that

$\begin{array}{rcl}\frac{1}{{b}_{n}}\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}E{X}_{i}^{\left(n\right)}|& \le & \frac{1}{{b}_{n}}\sum _{i=1}^{n}|{a}_{ni}|E|{X}_{i}|I\left(|{X}_{i}|>{b}_{n}\right)\le \frac{C}{{b}_{n}}\sum _{i=1}^{n}|{a}_{ni}|E|X|I\left(|X|>{b}_{n}\right)\\ & \le & \frac{Cn}{{b}_{n}}\cdot \frac{E{|X|}^{\beta }}{{b}_{n}^{\beta -1}}=\frac{C{n}^{1-\beta /\alpha }}{{log}^{\beta /\gamma }n}E{|X|}^{\beta }\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty },\end{array}$
(3.4)

since $\beta >\alpha +2$ implies $1-\beta /\alpha <0$.

Elementary Jensen’s inequality implies that for any $0<s\le t$,

${\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{t}\right)}^{1/t}\le {\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{s}\right)}^{1/s}.$
(3.5)
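A numeric check of (3.5) on an arbitrary finite vector (illustrative values):

```python
# Numeric sketch of (3.5): the l^t "norm" of a finite vector is
# non-increasing in t, so for 0 < s <= t,
# (sum |a_i|^t)^{1/t} <= (sum |a_i|^s)^{1/s}.
a = [0.3, 1.7, 0.01, 2.4, 0.9]

def lp(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

pairs_ok = all(lp(a, t) <= lp(a, s) + 1e-12
               for s in (0.25, 0.5, 1.0, 2.0)
               for t in (0.25, 0.5, 1.0, 2.0) if s <= t)
```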

Therefore, when $0<\alpha \le 1$, we have by Lemma 2.2, (3.5), Markov’s inequality and condition (ii) that

$\begin{array}{rcl}\frac{1}{{b}_{n}}\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}E{X}_{i}^{\left(n\right)}|& \le & \frac{1}{{b}_{n}}\sum _{i=1}^{n}|{a}_{ni}|E|{X}_{i}|I\left(|{X}_{i}|\le {b}_{n}\right)\le \frac{C{n}^{\delta /\alpha }}{{b}_{n}}\left[E|X|+{b}_{n}P\left(|X|>{b}_{n}\right)\right]\\ & \le & C{n}^{\left(\delta -1\right)/\alpha }{log}^{-1/\gamma }n\cdot E|X|+C{n}^{\left(\delta -\beta \right)/\alpha }{log}^{-\beta /\gamma }n\cdot E{|X|}^{\beta }\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty },\end{array}$
(3.6)

since $0<\delta <1$ and $E{|X|}^{\beta }<\mathrm{\infty }$.

Equations (3.4) and (3.6) yield (3.2). Hence, for n large enough,

$P\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{a}_{ni}{X}_{i}|>\epsilon {b}_{n}\right)\le \sum _{i=1}^{n}P\left(|{X}_{i}|>{b}_{n}\right)+P\left(\underset{1\le j\le n}{max}|{T}_{j}^{\left(n\right)}|>\frac{\epsilon }{2}{b}_{n}\right).$

To prove (2.1), we only need to show that

$I\doteq \sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}\sum _{i=1}^{n}P\left(|{X}_{i}|>{b}_{n}\right)<\mathrm{\infty }$
(3.7)

and

$J\doteq \sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}P\left(\underset{1\le j\le n}{max}|{T}_{j}^{\left(n\right)}|>\frac{\epsilon }{2}{b}_{n}\right)<\mathrm{\infty }.$
(3.8)

By the definition of stochastic domination, Markov’s inequality and condition (ii), we can see that

$I\doteq \sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}\sum _{i=1}^{n}P\left(|{X}_{i}|>{b}_{n}\right)\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -1}P\left(|X|>{b}_{n}\right)\le C\sum _{n=1}^{\mathrm{\infty }}\frac{{n}^{p\alpha -1-\beta /\alpha }}{{log}^{\beta /\gamma }n}E{|X|}^{\beta }<\mathrm{\infty },$
(3.9)

since $\beta >p{\alpha }^{2}$ gives $p\alpha -1-\beta /\alpha <-1$.

For $q\ge 2$, it follows from Lemma 2.1, ${C}_{r}$-inequality and Jensen’s inequality that

$\begin{array}{rcl}J& \doteq & \sum _{n=1}^{\mathrm{\infty }}{n}^{p\alpha -2}P\left(\underset{1\le j\le n}{max}|{T}_{j}^{\left(n\right)}|>\frac{\epsilon }{2}{b}_{n}\right)\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}E\left(\underset{1\le j\le n}{max}|{T}_{j}^{\left(n\right)}{|}^{q}\right)\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}\left[\sum _{i=1}^{n}{|{a}_{ni}|}^{q}E|{X}_{i}^{\left(n\right)}-E{X}_{i}^{\left(n\right)}{|}^{q}+{\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{2}E|{X}_{i}^{\left(n\right)}-E{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{q/2}\right]\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}\sum _{i=1}^{n}{|{a}_{ni}|}^{q}E|{X}_{i}^{\left(n\right)}{|}^{q}\\ +C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}{\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{2}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{q/2}\\ \doteq & {J}_{1}+{J}_{2}.\end{array}$
(3.10)

Take a suitable constant q such that $max\left\{2,\alpha \left(p\alpha -1\right)/\left(1-\delta \right)\right\}<q<\beta -\alpha$, which implies that

$\beta >\alpha +q,\phantom{\rule{2em}{0ex}}\beta /\alpha -q/\alpha >1,\phantom{\rule{2em}{0ex}}\beta >p{\alpha }^{2}-\alpha +q\delta ,\phantom{\rule{2em}{0ex}}\beta /\alpha -p\alpha +2-q\delta /\alpha >1$

and

$p\alpha -2+q\delta /\alpha -q/\alpha <-1,\phantom{\rule{1em}{0ex}}q>\alpha .$
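A concrete admissible parameter set can be verified mechanically (illustrative values, under the assumption that q is chosen strictly between $max\left\{2,\alpha \left(p\alpha -1\right)/\left(1-\delta \right)\right\}$ and $\beta -\alpha$, an interval that is nonempty under condition (ii)):

```python
# Parameter sanity check (illustrative values, not from the paper):
# alpha = 2, p = 1, delta = 0.5, beta = 7 satisfy condition (ii),
# q = 4.5 lies in the assumed admissible interval, and all the
# displayed consequences hold.
alpha, p, delta, beta, q = 2.0, 1.0, 0.5, 7.0, 4.5

assert beta > max(p * alpha ** 2,
                  alpha + alpha * (p * alpha - 1) / (1 - delta),
                  alpha + 2,
                  alpha * (p * alpha - 1) + 2 * delta)   # condition (ii)
assert max(2, alpha * (p * alpha - 1) / (1 - delta)) < q < beta - alpha

checks = [
    beta > alpha + q,
    beta / alpha - q / alpha > 1,
    beta > p * alpha ** 2 - alpha + q * delta,
    beta / alpha - p * alpha + 2 - q * delta / alpha > 1,
    p * alpha - 2 + q * delta / alpha - q / alpha < -1,
    q > alpha,
]
```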

It follows from Lemma 2.2, (3.5), Markov’s inequality and condition (ii) that

$\begin{array}{rcl}{J}_{1}& \doteq & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}\sum _{i=1}^{n}{|{a}_{ni}|}^{q}E|{X}_{i}^{\left(n\right)}{|}^{q}\\ =& C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}\sum _{i=1}^{n}{|{a}_{ni}|}^{q}E|{X}_{i}{|}^{q}I\left(|{X}_{i}|\le {b}_{n}\right)\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}\sum _{i=1}^{n}{|{a}_{ni}|}^{q}\left[E{|X|}^{q}I\left(|X|\le {b}_{n}\right)+{b}_{n}^{q}P\left(|X|>{b}_{n}\right)\right]\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{b}_{n}^{-q}E{|X|}^{q}I\left(|X|\le {b}_{n}\right)+C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }P\left(|X|>{b}_{n}\right)\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{b}_{n}^{-q}\sum _{k=2}^{n}E{|X|}^{q}I\left({b}_{k-1}<|X|\le {b}_{k}\right)+C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }\frac{E{|X|}^{\beta }}{{b}_{n}^{\beta }}\\ \le & C\sum _{k=2}^{\mathrm{\infty }}\sum _{n=k}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha -q/\alpha }{\left(logn\right)}^{-q/\gamma }{b}_{k}^{q}P\left(|X|>{b}_{k-1}\right)+C\sum _{n=2}^{\mathrm{\infty }}\frac{{n}^{p\alpha -2+q\delta /\alpha }}{{n}^{\beta /\alpha }{log}^{\beta /\gamma }n}\\ \le & C\sum _{k=3}^{\mathrm{\infty }}{b}_{k}^{q}\frac{E{|X|}^{\beta }}{{b}_{k-1}^{\beta }}+C\sum _{n=2}^{\mathrm{\infty }}\frac{1}{{n}^{\beta /\alpha -p\alpha +2-q\delta /\alpha }{log}^{\beta /\gamma }n}\\ \le & C\sum _{k=3}^{\mathrm{\infty }}\frac{{k}^{q/\alpha }{log}^{q/\gamma }k}{{\left(k-1\right)}^{\beta /\alpha }{log}^{\beta /\gamma }\left(k-1\right)}+C\sum _{n=2}^{\mathrm{\infty }}\frac{1}{{n}^{\beta /\alpha -p\alpha +2-q\delta /\alpha }{log}^{\beta /\gamma }n}\\ \le & C\sum _{k=3}^{\mathrm{\infty }}\frac{1}{{k}^{\beta /\alpha -q/\alpha }}+C\sum _{n=2}^{\mathrm{\infty }}\frac{1}{{n}^{\beta /\alpha -p\alpha +2-q\delta /\alpha }{log}^{\beta /\gamma }n}<\mathrm{\infty }.\end{array}$
(3.11)

By Lemma 2.2 again, (3.5), ${C}_{r}$-inequality and Jensen’s inequality, we can get that

$\begin{array}{rcl}{J}_{2}& \doteq & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}{\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{2}E|{X}_{i}^{\left(n\right)}{|}^{2}\right)}^{q/2}\\ =& C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}{\left(\sum _{i=1}^{n}{|{a}_{ni}|}^{2}E{|{X}_{i}|}^{2}I\left(|{X}_{i}|\le {b}_{n}\right)\right)}^{q/2}\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2}{b}_{n}^{-q}{\left[\sum _{i=1}^{n}{|{a}_{ni}|}^{2}\left(E{X}^{2}I\left(|X|\le {b}_{n}\right)+{b}_{n}^{2}P\left(|X|>{b}_{n}\right)\right)\right]}^{q/2}\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{b}_{n}^{-q}{\left[E{X}^{2}I\left(|X|\le {b}_{n}\right)+{b}_{n}^{2}P\left(|X|>{b}_{n}\right)\right]}^{q/2}\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{b}_{n}^{-q}{\left[E{X}^{2}I\left(|X|\le {b}_{n}\right)\right]}^{q/2}+C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{\left[P\left(|X|>{b}_{n}\right)\right]}^{q/2}\\ \le & C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }{b}_{n}^{-q}E{|X|}^{q}I\left(|X|\le {b}_{n}\right)+C\sum _{n=2}^{\mathrm{\infty }}{n}^{p\alpha -2+q\delta /\alpha }P\left(|X|>{b}_{n}\right)\\ <& \mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{(similar to the proof of (3.11))}.\end{array}$
(3.12)

Therefore, the desired result (2.1) follows from (3.9)-(3.12) immediately. This completes the proof of the theorem. □

Proof of Theorem 2.3 Similar to the proof of Theorem 2.1, we can get (2.3) immediately. Therefore,

$\begin{array}{rcl}\mathrm{\infty }& >& \sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|{S}_{j}|>\epsilon {b}_{n}\right)\\ =& \sum _{i=0}^{\mathrm{\infty }}\sum _{n={2}^{i}}^{{2}^{i+1}-1}\frac{1}{n}P\left(\underset{1\le j\le n}{max}|{S}_{j}|>\epsilon {n}^{\frac{1}{\alpha }}{\left(logn\right)}^{\frac{1}{\gamma }}\right)\ge \frac{1}{2}\sum _{i=1}^{\mathrm{\infty }}P\left(\underset{1\le j\le {2}^{i}}{max}|{S}_{j}|>\epsilon {2}^{\frac{i+1}{\alpha }}{\left(log{2}^{i+1}\right)}^{\frac{1}{\gamma }}\right).\end{array}$

By the Borel-Cantelli lemma, we obtain that

$\underset{i\to \mathrm{\infty }}{lim}\frac{{max}_{1\le j\le {2}^{i}}|{S}_{j}|}{{2}^{\frac{i+1}{\alpha }}{\left(log{2}^{i+1}\right)}^{\frac{1}{\gamma }}}=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.13)

For all positive integers n, there exists a positive integer ${i}_{0}$ such that ${2}^{{i}_{0}-1}\le n<{2}^{{i}_{0}}$. We have by (3.13) that

$\frac{|{S}_{n}|}{{b}_{n}}\le \frac{{max}_{1\le j\le {2}^{{i}_{0}}}|{S}_{j}|}{{2}^{\left({i}_{0}-1\right)/\alpha }{\left(log{2}^{{i}_{0}-1}\right)}^{1/\gamma }}\le C\cdot \frac{{max}_{1\le j\le {2}^{{i}_{0}}}|{S}_{j}|}{{2}^{\left({i}_{0}+1\right)/\alpha }{\left(log{2}^{{i}_{0}+1}\right)}^{1/\gamma }}\to 0\phantom{\rule{1em}{0ex}}\text{a.s. as }{i}_{0}\to \mathrm{\infty },$

which implies (2.4). This completes the proof of the theorem. □

## References

1. Moore CC: The degree of randomness in a stationary time series. Ann. Math. Stat. 1963, 34: 1253–1258. 10.1214/aoms/1177703860

2. Bradley RC: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 1992, 5: 355–374. 10.1007/BF01046741

3. Bryc W, Smolenski W: Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Am. Math. Soc. 1993, 119(2):629–635. 10.1090/S0002-9939-1993-1149969-7

4. Peligrad M, Gut A: Almost sure results for a class of dependent random variables. J. Theor. Probab. 1999, 12: 87–104. 10.1023/A:1021744626773

5. Utev S, Peligrad M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 2003, 16(1):101–115. 10.1023/A:1022278404634

6. Gan SX: Almost sure convergence for $\stackrel{˜}{\rho }$-mixing random variable sequences. Stat. Probab. Lett. 2004, 67: 289–298. 10.1016/j.spl.2003.12.011

7. Kuczmaszewska A: On Chung-Teicher type strong law of large numbers for ${\rho }^{\ast }$-mixing random variables. Discrete Dyn. Nat. Soc. 2008., 2008: Article ID 140548

8. Wu QY, Jiang YY: Some strong limit theorems for $\stackrel{˜}{\rho }$-mixing sequences of random variables. Stat. Probab. Lett. 2008, 78(8):1017–1023. 10.1016/j.spl.2007.09.061

9. Wang XJ, Hu SH, Shen Y, Yang WZ: Some new results for weakly dependent random variable sequences. Chinese J. Appl. Probab. Statist. 2010, 26(6):637–648.

10. Cai GH: Strong law of large numbers for ${\rho }^{\ast }$-mixing sequences with different distributions. Discrete Dyn. Nat. Soc. 2006., 2006: Article ID 27648

11. Kuczmaszewska A: On complete convergence for arrays of rowwise dependent random variables. Stat. Probab. Lett. 2007, 77(11):1050–1060. 10.1016/j.spl.2006.12.007

12. Zhu MH: Strong laws of large numbers for arrays of rowwise ${\rho }^{\ast }$-mixing random variables. Discrete Dyn. Nat. Soc. 2007., 2007: Article ID 74296

13. An J, Yuan DM: Complete convergence of weighted sums for ${\rho }^{\ast }$-mixing sequence of random variables. Stat. Probab. Lett. 2008, 78(12):1466–1472. 10.1016/j.spl.2007.12.020

14. Wang XJ, Li XQ, Yang WZ, Hu SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 2012, 25(11):1916–1920. 10.1016/j.aml.2012.02.069

15. Peligrad M: Maximum of partial sums and an invariance principle for a class of weak dependent random variables. Proc. Am. Math. Soc. 1998, 126(4):1181–1189. 10.1090/S0002-9939-98-04177-X

16. Wu QY, Jiang YY: Some strong limit theorems for weighted product sums of $\stackrel{˜}{\rho }$-mixing sequences of random variables. J. Inequal. Appl. 2009., 2009: Article ID 174768

17. Wu QY, Jiang YY: Chover-type laws of the k-iterated logarithm for $\stackrel{˜}{\rho }$-mixing sequences of random variables. J. Math. Anal. Appl. 2010, 366: 435–443. 10.1016/j.jmaa.2009.12.059

18. Wu QY: Further study strong consistency of estimator in linear model for $\stackrel{˜}{\rho }$-mixing samples. J. Syst. Sci. Complex. 2011, 24: 969–980. 10.1007/s11424-011-8407-7

19. Wang XJ, Xia FX, Ge MM, Hu SH, Yang WZ: Complete consistency of the estimator of nonparametric regression models based on $\stackrel{˜}{\rho }$-mixing sequences. Abstr. Appl. Anal. 2012., 2012: Article ID 907286

20. Wu YF, Wang CH, Volodin A: Limiting behavior for arrays of rowwise ${\rho }^{\ast }$-mixing random variables. Lith. Math. J. 2012, 52(2):214–221. 10.1007/s10986-012-9168-2

21. Guo ML, Zhu DJ: Equivalent conditions of complete moment convergence of weighted sums for ${\rho }^{\ast }$-mixing sequence of random variables. Stat. Probab. Lett. 2013, 83: 13–20. 10.1016/j.spl.2012.08.015

22. Cheng PE: A note on strong convergence rates in nonparametric regression. Stat. Probab. Lett. 1995, 24: 357–364. 10.1016/0167-7152(94)00195-E

23. Bai ZD, Cheng PE, Zhang CH: An extension of the Hardy-Littlewood strong law. Stat. Sin. 1997, 7: 923–928.

24. Cuzick J: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 1995, 8: 625–641. 10.1007/BF02218047

25. Bai ZD, Cheng PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 2000, 46: 105–112. 10.1016/S0167-7152(99)00093-0

26. Sung SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 2011, 52: 447–454. 10.1007/s00362-009-0241-9

27. Sung SH: On the strong convergence for weighted sums of ${\rho }^{\ast }$-mixing random variables. Stat. Pap. 2012. 10.1007/s00362-012-0461-2

28. Cai GH: Strong laws for weighted sums of NA random variables. Metrika 2008, 68: 323–331. 10.1007/s00184-007-0160-5

29. Jing BY, Liang HY: Strong limit theorems for weighted sums of negatively associated random variables. J. Theor. Probab. 2008, 21: 890–909. 10.1007/s10959-007-0128-4

30. Zhou XC, Tan CC, Lin JG: On the strong laws for weighted sums of ${\rho }^{\ast }$-mixing random variables. J. Inequal. Appl. 2011., 2011: Article ID 157816

31. Wang XJ, Hu SH, Volodin AI: Strong limit theorems for weighted sums of NOD sequence and exponential inequalities. Bull. Korean Math. Soc. 2011, 48(5):923–938. 10.4134/BKMS.2011.48.5.923

32. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise negatively orthant dependent random variables. RACSAM Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. a Mat. 2012, 106(2):235–245. 10.1007/s13398-011-0048-0

33. Wang XJ, Hu SH, Yang WZ, Wang XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012., 2012: Article ID 315138

34. Wu QY, Chen PY: An improved result in almost sure central limit theorem for self-normalized products of partial sums. J. Inequal. Appl. 2013., 2013: Article ID 129

## Acknowledgements

The authors are most grateful to the editor and anonymous referee for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11201001, 11171001), the Natural Science Foundation of Anhui Province (1308085QA03, 11040606M12, 1208085QA03), the 211 project of Anhui University, the Youth Science Research Fund of Anhui University, Applied Teaching Model Curriculum of Anhui University (XJYYXKC04) and the Students Science Research Training Program of Anhui University (KYXL2012007, kyxl2013003).

## Author information


### Corresponding author

Correspondence to Aiting Shen.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.


Shen, A., Wu, R. Strong convergence results for weighted sums of $\stackrel{˜}{\rho }$-mixing random variables. J Inequal Appl 2013, 327 (2013). https://doi.org/10.1186/1029-242X-2013-327