# On the complete convergence for arrays of rowwise ψ-mixing random variables

## Abstract

Some sufficient conditions for complete convergence for maximal weighted sums ${max}_{1\le j\le n}|{\sum }_{k=1}^{j}{a}_{nk}{X}_{nk}|$ and weighted sums ${\sum }_{k=1}^{n}{a}_{nk}{X}_{nk}$ are presented, where $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ is an array of rowwise ψ-mixing random variables, and $\left\{{a}_{nk},1\le k\le n,n\ge 1\right\}$ is an array of constants. The results obtained extend and improve the corresponding results in the previous literature.

MSC:60F15.

## 1 Introduction

The following notion was first given by Hsu and Robbins [1].

Definition 1.1 A sequence of random variables $\left\{{U}_{n},n\ge 1\right\}$ is said to converge completely to a constant θ if for any $\epsilon >0$,

$\sum _{n=1}^{\mathrm{\infty }}P\left(|{U}_{n}-\theta |>\epsilon \right)<\mathrm{\infty }.$

In this case, we write ${U}_{n}\to \theta$ completely. In view of the Borel-Cantelli lemma, the result above implies that ${U}_{n}\to \theta$ almost surely. Therefore, complete convergence is an important tool for establishing the almost sure convergence of sums of random variables as well as of weighted sums of random variables.
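As a concrete (toy) illustration of Definition 1.1, which is not part of the paper's argument, the following Python sketch estimates the tail probabilities $P(|S_n/n|>\epsilon)$ for sums of iid Uniform(−1, 1) variables by Monte Carlo. Hoeffding's inequality gives $P(|S_n/n|>\epsilon)\le 2{e}^{-n{\epsilon }^{2}/2}$ here, so the series of tail probabilities converges, which is exactly complete convergence of $S_n/n$ to 0.

```python
import random

def tail_prob(n, eps, reps=2000, rng=None):
    """Monte Carlo estimate of P(|S_n / n| > eps), where S_n is the sum
    of n iid Uniform(-1, 1) variables (so S_n / n -> 0)."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(reps):
        s = sum(rng.uniform(-1.0, 1.0) for _ in range(n))
        if abs(s / n) > eps:
            hits += 1
    return hits / reps

# The estimated tail probabilities shrink rapidly with n, consistent
# with summability of P(|S_n/n| > eps) over n (complete convergence).
rng = random.Random(12345)
probs = [tail_prob(n, eps=0.5, rng=rng) for n in (5, 20, 80)]
```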

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables, defined on a probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$, and denote σ-algebras

${\mathcal{F}}_{n}^{m}=\sigma \left({X}_{k},n\le k\le m\right),\phantom{\rule{1em}{0ex}}1\le n\le m\le \mathrm{\infty }.$

As usual, for a σ-algebra $\mathcal{F}$, we denote by ${\mathcal{L}}^{2}\left(\mathcal{F}\right)$ the class of all $\mathcal{F}$-measurable random variables with finite second moment. Given σ-algebras $\mathcal{A}$, $\mathcal{B}$ in $\mathcal{F}$, let

$\begin{array}{c}\psi \left(\mathcal{A},\mathcal{B}\right)=\underset{A\in \mathcal{A},B\in \mathcal{B},P\left(A\right)P\left(B\right)>0}{sup}|\frac{P\left(AB\right)}{P\left(A\right)P\left(B\right)}-1|,\hfill \\ \phi \left(\mathcal{A},\mathcal{B}\right)=\underset{A\in \mathcal{A},B\in \mathcal{B},P\left(A\right)>0}{sup}|P\left(B|A\right)-P\left(B\right)|.\hfill \end{array}$

Define the mixing coefficients by

$\psi \left(n\right)=\underset{k\ge 1}{sup}\psi \left({\mathcal{F}}_{1}^{k},{\mathcal{F}}_{k+n}^{\mathrm{\infty }}\right),\phantom{\rule{2em}{0ex}}\phi \left(n\right)=\underset{k\ge 1}{sup}\phi \left({\mathcal{F}}_{1}^{k},{\mathcal{F}}_{k+n}^{\mathrm{\infty }}\right),\phantom{\rule{1em}{0ex}}n\ge 0.$

The concepts of ψ-mixing and φ-mixing random variables were introduced by Blum et al. [2] and Dobrushin [3], respectively.

Definition 1.2 A sequence of random variables $\left\{{X}_{n},n\ge 1\right\}$ is said to be a ψ-mixing (φ-mixing) sequence of random variables if $\psi \left(n\right)↓0$ ($\phi \left(n\right)↓0$) as $n\to \mathrm{\infty }$.

Clearly, from the definition above, independence implies both ψ-mixing and φ-mixing. It is easily seen that the ψ-mixing condition is stronger than the φ-mixing condition; hence the ψ-mixing sequences form a subclass of the φ-mixing sequences. In the years after the appearance of Dobrushin [3], many works on the convergence properties of φ-mixing random variables have emerged. We refer the reader to Ibragimov [4], Cogburn [5], Sen [6], Choi and Sung [7], Utev [8], Chen [9], Shao [10], Rüdiger [11], Chen et al. [12], Zhou [13], Wang et al. [14, 15], Guo [16].
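The relation between the two coefficients can be checked on a small example. The sketch below (the joint law is our own toy choice, not from the paper) enumerates all events of the σ-algebras generated by two dependent Bernoulli variables and computes $\psi \left(\mathcal{A},\mathcal{B}\right)$ and $\phi \left(\mathcal{A},\mathcal{B}\right)$ exactly; since $|P\left(B|A\right)-P\left(B\right)|=P\left(B\right)|\frac{P\left(AB\right)}{P\left(A\right)P\left(B\right)}-1|$, one always has $\phi \le \psi$, matching the claim that ψ-mixing is the stronger condition.

```python
# Toy joint law of two dependent Bernoulli variables (our own example,
# not taken from the paper): outcomes are pairs (x, y).
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
omega = set(joint)

def prob(event):
    # Probability of a set of outcomes under the joint law.
    return sum(joint[w] for w in event)

# All events of sigma(X) and sigma(Y): empty set, the two level sets, omega.
events_X = [set(), {w for w in omega if w[0] == 0},
            {w for w in omega if w[0] == 1}, omega]
events_Y = [set(), {w for w in omega if w[1] == 0},
            {w for w in omega if w[1] == 1}, omega]

# psi: sup |P(AB)/(P(A)P(B)) - 1| over events with P(A)P(B) > 0
psi = max(abs(prob(A & B) / (prob(A) * prob(B)) - 1)
          for A in events_X for B in events_Y
          if prob(A) * prob(B) > 0)
# phi: sup |P(B|A) - P(B)| over events with P(A) > 0
phi = max(abs(prob(A & B) / prob(A) - prob(B))
          for A in events_X for B in events_Y if prob(A) > 0)
```

For this law one finds $\psi =0.5$ and $\phi =0.2$, so the φ-coefficient is indeed dominated by the ψ-coefficient.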

However, to our knowledge, few papers discuss sequences or arrays of ψ-mixing random variables; exceptions include Blum et al. [2], Bradley [17], Yang [18], Wu and Zhu [19], Wang et al. [14, 15], and Yang and Liu [20]. The goal of this paper is to study complete convergence for arrays of rowwise ψ-mixing random variables.

We now recall the following concept of stochastic domination, which is a slight generalization of identical distribution.

Definition 1.3 An array of rowwise random variables $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ is said to be stochastically dominated by a nonnegative random variable X (write $\left\{{X}_{nk}\right\}\prec X$) if there exists a constant $C>0$ such that

$\underset{n,k}{sup}P\left(|{X}_{nk}|>x\right)\le CP\left(X>x\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x>0.$

Stochastic dominance of $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ by the random variable X implies that $E{|{X}_{nk}|}^{p}\le CE{X}^{p}$ if the p-th moment of X exists, i.e., if $E{X}^{p}<\mathrm{\infty }$.
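Definition 1.3 can be verified directly for concrete laws. The sketch below (our own toy choice of distributions, not from the paper) checks $\underset{n,k}{sup}P\left(|{X}_{nk}|>x\right)\le CP\left(X>x\right)$ with $C=1$ when every $|{X}_{nk}|$ is Uniform(0, 1) and the dominating X is Uniform(0, 2), and also checks the implied moment bound for $p=2$.

```python
def tail_Xnk(x):
    # P(|X_nk| > x) when |X_nk| ~ Uniform(0, 1), the same law for every (n, k)
    return min(1.0, max(0.0, 1.0 - x))

def tail_X(x):
    # P(X > x) for the dominating variable X ~ Uniform(0, 2)
    return min(1.0, max(0.0, (2.0 - x) / 2.0))

C = 1.0
xs = [i / 100.0 for i in range(1, 300)]  # grid over x > 0
# 1 - x <= (2 - x)/2 for all x >= 0, so domination holds with C = 1
dominated = all(tail_Xnk(x) <= C * tail_X(x) for x in xs)

# Moment comparison for p = 2: E|X_nk|^2 = 1/3 and E X^2 = 4/3,
# so E|X_nk|^2 <= C * E X^2, as the remark above asserts.
moment_ok = (1.0 / 3.0) <= C * (4.0 / 3.0)
```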

Hu et al. [21] obtained the following result in the complete convergence.

Theorem A Let $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of rowwise independent random variables with $E{X}_{nk}=0$. Suppose that $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ are uniformly bounded by some random variable X. If $E{|X|}^{2p}<\mathrm{\infty }$ for some $1\le p<2$, then

${n}^{-1/p}\sum _{k=1}^{n}{X}_{nk}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{completely}}.$

Taylor et al. [22] and Baek et al. [23] extended and generalized Theorem A to rowwise negatively dependent (ND) random variables.

The main purpose of this article is to discuss the complete convergence for weighted sums of ψ-mixing random variables. We extend Theorem A by assuming ψ-mixing instead of independence. It is worth pointing out that our main methods differ from those used by Hu et al. [21].

Below, C will be used to denote various positive constants, whose value may vary from one application to another. For a finite set A, the symbol $\mathrm{♯}\left(A\right)$ denotes the number of elements in the set A. ${I}_{\left(A\right)}$ will indicate the indicator function of A.

## 2 Main results and some lemmas

Now, we state our main results. The proofs will be given in Section 3.

Theorem 2.1 Let $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of rowwise ψ-mixing random variables with ${\sum }_{m=1}^{\mathrm{\infty }}\psi \left(m\right)<\mathrm{\infty }$ and $E{X}_{nk}=0$. Suppose that $\left\{{X}_{nk}\right\}\prec X$ and $E{X}^{2p}<\mathrm{\infty }$ for some $p>0$. Let $\left\{{a}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of real numbers satisfying ${max}_{1\le k\le n}|{a}_{nk}|=O\left({n}^{-\alpha }\right)$ for some $\alpha >1/\left(2p\right)$. Furthermore, when $p\ge 1$, we suppose that there exists a constant $\theta >0$ such that ${\sum }_{k=1}^{n}{a}_{nk}^{2}\le C{n}^{-\theta }$. Then

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|\sum _{k=1}^{j}{a}_{nk}{X}_{nk}|>\epsilon \right)<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathrm{\forall }\epsilon >0.$

Taking ${a}_{nk}={n}^{-1/p}$ and $1\le p<2$ in Theorem 2.1, we obtain the following corollary.

Corollary 2.1 Let $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of rowwise ψ-mixing random variables with ${\sum }_{m=1}^{\mathrm{\infty }}\psi \left(m\right)<\mathrm{\infty }$ and $E{X}_{nk}=0$. Suppose that $\left\{{X}_{nk}\right\}\prec X$ and $E{X}^{2p}<\mathrm{\infty }$ for some $1\le p<2$, then

$\sum _{n=1}^{\mathrm{\infty }}P\left(\underset{1\le j\le n}{max}|\sum _{k=1}^{j}{X}_{nk}|>{n}^{1/p}\epsilon \right)<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathrm{\forall }\epsilon >0.$

Remark 2.1 Since independence implies ψ-mixing, Theorem 2.1 and Corollary 2.1 hold for arrays of rowwise independent random variables. Therefore, Theorem 2.1 and Corollary 2.1 extend and improve Theorem A.
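In the independent case (where every $\psi \left(m\right)=0$, so ${\sum }_{m=1}^{\mathrm{\infty }}\psi \left(m\right)<\mathrm{\infty }$ trivially), the conclusion of Corollary 2.1 can be glimpsed numerically. The sketch below uses our own toy setup, not from the paper: iid Uniform(−1, 1) row entries, $p=1.5$, $\epsilon =0.5$. It estimates ${q}_{n}=P\left({max}_{1\le j\le n}|{\sum }_{k=1}^{j}{X}_{nk}|>{n}^{1/p}\epsilon \right)$ by Monte Carlo; the rapid decay of the estimates is consistent with summability of ${q}_{n}$ over n.

```python
import random

def exceed_prob(n, p=1.5, eps=0.5, reps=1000, rng=None):
    """Monte Carlo estimate of P(max_{1<=j<=n} |sum_{k<=j} X_{nk}| > n^(1/p) * eps)
    for a row of n iid Uniform(-1, 1) variables."""
    rng = rng or random.Random(0)
    thresh = (n ** (1.0 / p)) * eps
    hits = 0
    for _ in range(reps):
        s, mx = 0.0, 0.0
        for _ in range(n):
            s += rng.uniform(-1.0, 1.0)
            mx = max(mx, abs(s))  # running max of |partial sums|
        if mx > thresh:
            hits += 1
    return hits / reps

rng = random.Random(2013)
qs = [exceed_prob(n, rng=rng) for n in (4, 32, 256)]
```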

Theorem 2.2 Let $\left\{{X}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of rowwise ψ-mixing random variables with $E{X}_{nk}=0$. Suppose that $\left\{{X}_{nk}\right\}\prec X$ and $E{X}^{2p}<\mathrm{\infty }$ for some $p\ge 1$. Let $\left\{{a}_{nk},1\le k\le n,n\ge 1\right\}$ be an array of real numbers satisfying ${max}_{1\le k\le n}|{a}_{nk}|=O\left({n}^{-\alpha }\right)$ for some $\alpha >1/\left(2p\right)$. Suppose that the following statements hold.

1. (i) There exists a positive constant $\lambda <\frac{1}{2p}$ such that ${\sum }_{n=1}^{\mathrm{\infty }}{\psi }^{\frac{\lambda }{1-\lambda }}\left(n\right)<\mathrm{\infty }$;

2. (ii) $logn{\sum }_{k=1}^{n}{a}_{nk}^{2}=o\left(1\right)$ if $\frac{1}{2p}<\alpha \le \frac{1}{2}$.

Then

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(|\sum _{k=1}^{n}{a}_{nk}{X}_{nk}|>\epsilon \right)<\mathrm{\infty },\phantom{\rule{1em}{0ex}}\mathrm{\forall }\epsilon >0.$

Remark 2.2 Compared with Theorem 2.1, Theorem 2.2 requires a stronger mixing rate but weakens the requirement on ${\sum }_{k=1}^{n}{a}_{nk}^{2}$. In fact, $logn{\sum }_{k=1}^{n}{a}_{nk}^{2}=o\left(1\right)$ holds if ${\sum }_{k=1}^{n}{a}_{nk}^{2}\le C{n}^{-\theta }$, $\theta >0$.
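The sufficient condition in Remark 2.2 can be sanity-checked numerically. The sketch below uses our own toy weights ${a}_{nk}={n}^{-1/p}$ with $p=1.5$ (not a prescription from the paper), for which ${\sum }_{k=1}^{n}{a}_{nk}^{2}={n}^{1-2/p}={n}^{-1/3}$, so $logn{\sum }_{k=1}^{n}{a}_{nk}^{2}\to 0$, consistent with the remark.

```python
import math

def log_weighted_sq_sum(n, p=1.5):
    # log(n) * sum_{k=1}^{n} a_{nk}^2 for the toy weights a_{nk} = n^{-1/p};
    # here sum_k a_{nk}^2 = n * n^{-2/p} = n^{1 - 2/p}.
    s = n * (n ** (-2.0 / p))
    return math.log(n) * s

# For p = 1.5 the quantity behaves like n^{-1/3} * log(n) -> 0.
vals = [log_weighted_sq_sum(n) for n in (10, 100, 1000, 10000)]
```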

Now, we state some lemmas which will be used in the proofs of our main results.

Lemma 2.1 (Wang et al. [14, 15])

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables satisfying ${\sum }_{m=1}^{\mathrm{\infty }}\psi \left(m\right)<\mathrm{\infty }$, $q\ge 2$. Assume that $E{X}_{n}=0$ and $E{|{X}_{n}|}^{q}<\mathrm{\infty }$ for each $n\ge 1$. Then there exists a constant C depending only on q and $\psi \left(\cdot \right)$ such that

$E\underset{1\le j\le n}{max}{|\sum _{k=1}^{j}{X}_{k}|}^{q}\le C\left\{\sum _{k=1}^{n}E{|{X}_{k}|}^{q}+{\left(\sum _{k=1}^{n}E{X}_{k}^{2}\right)}^{q/2}\right\}.$

Lemma 2.2 (Yang [18])

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables with $E{X}_{k}=0$, $|{X}_{k}|\le d<\mathrm{\infty }$ a.s., $k=1,2,\dots$ , $0<\lambda <1$, $m=\left[{n}^{\lambda }\right]$. Then $\mathrm{\forall }\epsilon >0$,

$P\left(|\sum _{k=1}^{n}{X}_{k}|>\epsilon \right)\le 2e{C}_{1}exp\left\{-t\epsilon +{C}_{2}{t}^{2}{B}_{n}\right\},$

where ${B}_{n}={\sum }_{k=1}^{n}E{X}_{k}^{2}$, $tmd\le 1/4$, ${C}_{1}=exp\left\{2e{n}^{1-\lambda }\psi \left(m\right)\right\}$, ${C}_{2}=4\left(1+4{\sum }_{k=1}^{2m}\psi \left(k\right)\right)$.

Lemma 2.3 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables, and let ${A}_{j}=\left\{|{X}_{j}|\ge {x}_{j}\right\}$, ${x}_{j}\in {\mathbb{R}}^{+}$, $j=1,2,\dots ,N$, then

$P\left({A}_{1},{A}_{2},\dots ,{A}_{N}\right)\le {\left(1+\psi \left(1\right)\right)}^{N}\prod _{j=1}^{N}P\left({A}_{j}\right).$

Proof By the definition of ψ-mixing, we have

$\begin{array}{rcl}P\left({A}_{1},{A}_{2},\dots ,{A}_{N}\right)& \le & \left(1+\psi \left(1\right)\right)P\left({A}_{1}\right)P\left({A}_{2},\dots ,{A}_{N}\right)\\ \le & \cdots \\ \le & {\left(1+\psi \left(1\right)\right)}^{N}P\left({A}_{1}\right)P\left({A}_{2}\right)\cdots P\left({A}_{N}\right).\end{array}$

The proof is complete. □

## 3 Proofs

In this section, we state the proofs of our main results.

Proof of Theorem 2.1 Let ${S}_{nj}={\sum }_{k=1}^{j}{a}_{nk}{X}_{nk}$, $1\le j\le n$. Since ${a}_{nk}={a}_{nk}^{+}-{a}_{nk}^{-}$, without loss of generality, we may assume that $0<{a}_{nk}\le C{n}^{-\alpha }$. Let $0<\rho <\frac{\left(2\alpha p-1\right)\left(N-1\right)}{2pN}$, where N is a positive integer with $N>1$. Let

$\begin{array}{c}{X}_{nk}^{\mathrm{\prime }}={X}_{nk}{I}_{\left({a}_{nk}|{X}_{nk}|\le {n}^{-\rho }\right)},\phantom{\rule{2em}{0ex}}{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }}={X}_{nk}{I}_{\left({a}_{nk}|{X}_{nk}|>\epsilon /N\right)},\hfill \\ {X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}={X}_{nk}-{X}_{nk}^{\mathrm{\prime }}-{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }}={X}_{nk}{I}_{\left({n}^{-\rho }<{a}_{nk}|{X}_{nk}|\le \epsilon /N\right)},\hfill \\ {S}_{nj}^{\mathrm{\prime }}=\sum _{k=1}^{j}{a}_{nk}{X}_{nk}^{\mathrm{\prime }},\phantom{\rule{2em}{0ex}}{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}=\sum _{k=1}^{j}{a}_{nk}{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }},\phantom{\rule{2em}{0ex}}{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}=\sum _{k=1}^{j}{a}_{nk}{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}.\hfill \end{array}$

Firstly, we prove ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left({max}_{1\le j\le n}|{S}_{nj}^{\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }$. By $\left\{{X}_{nk}\right\}\prec X$, we know that $E{|{X}_{nk}|}^{2p}\le CE{X}^{2p}<\mathrm{\infty }$. If $0<p\le \frac{1}{2}$, we have

$\begin{array}{rl}\underset{1\le j\le n}{max}|\sum _{k=1}^{j}{a}_{nk}E{X}_{nk}^{\mathrm{\prime }}|& \le \sum _{k=1}^{n}{a}_{nk}|E{X}_{nk}^{\mathrm{\prime }}|\le \sum _{k=1}^{n}{a}_{nk}E|{X}_{nk}|{I}_{\left({a}_{nk}|{X}_{nk}|\le {n}^{-\rho }\right)}\\ \le \sum _{k=1}^{n}\frac{{a}_{nk}^{2p}E{|{X}_{nk}|}^{2p}}{{n}^{-2\rho p}}{n}^{-\rho }\le C{n}^{\rho \left(2p-1\right)-\left(2\alpha p-1\right)}\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty }.\end{array}$
(1)

If $p>1/2$, by $E{X}_{nk}=0$ and $\rho <\frac{2\alpha p-1}{2p}<\frac{2\alpha p-1}{2p-1}$, we also have

$\begin{array}{rcl}\underset{1\le j\le n}{max}|\sum _{k=1}^{j}{a}_{nk}E{X}_{nk}^{\mathrm{\prime }}|& \le & \sum _{k=1}^{n}{a}_{nk}|E{X}_{nk}^{\mathrm{\prime }}|\le \sum _{k=1}^{n}{a}_{nk}E|{X}_{nk}|{I}_{\left({a}_{nk}|{X}_{nk}|>{n}^{-\rho }\right)}\\ \le & \sum _{k=1}^{n}\frac{{a}_{nk}^{2p}E{|{X}_{nk}|}^{2p}}{{n}^{-2\rho p}}{n}^{-\rho }\le C{n}^{\rho \left(2p-1\right)-\left(2\alpha p-1\right)}\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty }.\end{array}$

Therefore, we know that (1) holds for $p>0$. Let ${S}_{nj}^{\ast }={\sum }_{k=1}^{j}{a}_{nk}\left({X}_{nk}^{\mathrm{\prime }}-E{X}_{nk}^{\mathrm{\prime }}\right)$. To prove ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left({max}_{1\le j\le n}|{S}_{nj}^{\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }$, it suffices to show that ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left({max}_{1\le j\le n}|{S}_{nj}^{\ast }|>\epsilon \right)<\mathrm{\infty }$.

If $0<p<1$, by Markov’s inequality and Lemma 2.1, we have

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\ast }|>\epsilon \right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}E{\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\ast }|\right)}^{2}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}\sum _{k=1}^{n}{a}_{nk}^{2}E{\left({X}_{nk}^{\mathrm{\prime }}\right)}^{2}\\ \phantom{\rule{1em}{0ex}}\le \sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}\sum _{k=1}^{n}{a}_{nk}^{2p}E\left({|{a}_{nk}{X}_{nk}^{\mathrm{\prime }}|}^{2-2p}{|{X}_{nk}^{\mathrm{\prime }}|}^{2p}\right)\\ \phantom{\rule{1em}{0ex}}\le \sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}{n}^{-\rho \left(2-2p\right)}\sum _{k=1}^{n}{a}_{nk}^{2p}E{|{X}_{nk}^{\mathrm{\prime }}|}^{2p}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1-2\rho \left(1-p\right)}<\mathrm{\infty }.\end{array}$
(2)

If $p\ge 1$, take $q>max\left\{2p,2\left(2\alpha p-1\right)/\theta \right\}$. By $q>2$ and Lemma 2.1, we have

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\ast }|>\epsilon \right)\\ \phantom{\rule{1em}{0ex}}\le \sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}\left[\sum _{k=1}^{n}{a}_{nk}^{q}E{|{X}_{nk}^{\mathrm{\prime }}|}^{q}+{\left(\sum _{k=1}^{n}{a}_{nk}^{2}E{\left({X}_{nk}^{\mathrm{\prime }}\right)}^{2}\right)}^{q/2}\right].\end{array}$
(3)

By a similar argument as in the proof of (2) (replacing the exponent 2 by q), we can get

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}\sum _{k=1}^{n}{a}_{nk}^{q}E{|{X}_{nk}^{\mathrm{\prime }}|}^{q}<\mathrm{\infty }.$
(4)

Noting that $E{|{X}_{nk}^{\mathrm{\prime }}|}^{2}\le CE{X}^{2}<\mathrm{\infty }$ and recalling the definition of q, we have

$\begin{array}{r}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}{\left(\sum _{k=1}^{n}{a}_{nk}^{2}E{\left({X}_{nk}^{\mathrm{\prime }}\right)}^{2}\right)}^{q/2}\\ \phantom{\rule{1em}{0ex}}\le \sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}{\left(\sum _{k=1}^{n}{a}_{nk}^{2}\right)}^{q/2}{\left(E{X}^{2}\right)}^{q/2}\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1+\left(2\alpha p-1\right)-\theta q/2}{\left(E{X}^{2}\right)}^{q/2}<\mathrm{\infty }.\end{array}$
(5)

From (3)-(5), we know that (2) still holds for $p\ge 1$. By (1) and (2), we have

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }.$

Secondly, we prove ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left({max}_{1\le j\le n}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }$. Let ${\phi }_{n}\left(j\right)=\mathrm{♯}\left\{1\le k\le n:{a}_{nk}>\epsilon /\left(jN\right)\right\}$ and ${\varphi }_{j}=\left[{\left(jCN/\epsilon \right)}^{1/\alpha }\right]$, then

$\begin{array}{rcl}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)& \le & P\left(\bigcup _{k=1}^{n}\left\{{a}_{nk}|{X}_{nk}|>\epsilon /N\right\}\right)\\ & \le & \sum _{k=1}^{n}P\left({a}_{nk}|{X}_{nk}|>\epsilon /N\right)\le C\sum _{k=1}^{n}P\left({a}_{nk}X>\epsilon /N\right)\\ & =& C\sum _{j=1}^{\mathrm{\infty }}\sum _{k=1}^{n}P\left({a}_{nk}X>\epsilon /N,j-1\le X<j\right).\end{array}$
(6)

On the event $\left\{{a}_{nk}X>\epsilon /N,j-1\le X<j\right\}$, we have ${a}_{nk}>\epsilon /\left(jN\right)$. Noting ${a}_{nk}\le C{n}^{-\alpha }$, we get $n<{\left(jCN/\epsilon \right)}^{1/\alpha }$, and hence $n\le {\varphi }_{j}$. Then

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)& \le & C\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}\sum _{j=1}^{\mathrm{\infty }}{\phi }_{n}\left(j\right)P\left(j-1\le X<j\right)\\ & \le & C\sum _{j=1}^{\mathrm{\infty }}\sum _{n=1}^{{\varphi }_{j}}{n}^{2\alpha p-2}{\phi }_{n}\left(j\right)P\left(j-1\le X<j\right).\end{array}$
(7)

Take $v\in \left(0,\frac{2\alpha p-1}{\alpha }\right)$, then ${\sum }_{k=1}^{n}{a}_{nk}^{v}\ge {\phi }_{n}\left(j\right){\epsilon }^{v}/{\left(jN\right)}^{v}$. From ${\sum }_{k=1}^{n}{a}_{nk}^{v}\le C{n}^{1-\alpha v}$, we have

${\phi }_{n}\left(j\right)\le {N}^{v}{j}^{v}/{\epsilon }^{v}\sum _{k=1}^{n}{a}_{nk}^{v}\le C{n}^{1-\alpha v}{j}^{v}.$
(8)

Therefore, by (7) and (8), we have

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\le C\sum _{j=1}^{\mathrm{\infty }}\sum _{n=1}^{{\varphi }_{j}}{n}^{2\alpha p-\alpha v-1}{j}^{v}P\left(j-1\le X<j\right).$
(9)

By the definition of v, we have $2\alpha p-\alpha v-1>0$, then

$\sum _{n=1}^{{\varphi }_{j}}{n}^{2\alpha p-\alpha v-1}\le {\varphi }_{j}^{\alpha \left(2p-v\right)}\le C{j}^{2p-v}.$
(10)

From (9), (10) and $E{X}^{2p}<\mathrm{\infty }$, we have

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\le C\sum _{j=1}^{\mathrm{\infty }}{j}^{2p}P\left(j-1\le X<j\right)<\mathrm{\infty }.$

Finally, we prove ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left({max}_{1\le j\le n}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }$. Obviously, we know that $P\left({max}_{1\le j\le n}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\le P\left({\sum }_{k=1}^{n}{a}_{nk}|{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)$. Let $M=\mathrm{♯}\left\{1\le k\le n:{n}^{-\rho }<{a}_{nk}|{X}_{nk}|\le \epsilon /N\right\}$. Since each nonzero summand ${a}_{nk}|{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|$ is at most $\epsilon /N$, we must have $M\ge N$ in order that ${\sum }_{k=1}^{n}{a}_{nk}|{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon$. Taking ${W}_{k}=\left\{{a}_{nk}|{X}_{nk}|>{n}^{-\rho }\right\}$, we have

$P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\le P\left(M\ge N\right)\le \sum _{1\le {i}_{1}<{i}_{2}<\cdots <{i}_{N}\le n}P\left({W}_{{i}_{1}}{W}_{{i}_{2}}\cdots {W}_{{i}_{N}}\right).$
(11)

By Lemma 2.3, we have

$P\left({W}_{{i}_{1}}{W}_{{i}_{2}}\cdots {W}_{{i}_{N}}\right)\le {\left(1+\psi \left(1\right)\right)}^{N}P\left({W}_{{i}_{1}}\right)P\left({W}_{{i}_{2}}\right)\cdots P\left({W}_{{i}_{N}}\right).$
(12)

From (11) and (12), we have

$\begin{array}{r}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\\ \phantom{\rule{1em}{0ex}}\le {\left(1+\psi \left(1\right)\right)}^{N}{n}^{2\alpha p-2}\sum _{1\le {i}_{1}<{i}_{2}<\cdots <{i}_{N}\le n}P\left({W}_{{i}_{1}}\right)P\left({W}_{{i}_{2}}\right)\cdots P\left({W}_{{i}_{N}}\right)\\ \phantom{\rule{1em}{0ex}}\le C{n}^{2\alpha p-2}\left(\begin{array}{c}n\\ N\end{array}\right){P}^{N}\left(X>{a}_{nk}^{-1}{n}^{-\rho }\right)\\ \phantom{\rule{1em}{0ex}}\le C{n}^{2\alpha p-2}{n}^{N}{P}^{N}\left(X>{C}^{-1}{n}^{-\rho +\alpha }\right)\\ \phantom{\rule{1em}{0ex}}\le C{n}^{-1-\left(2\alpha p-1\right)\left(N-1\right)+2\rho pN}{\left(E{X}^{2p}\right)}^{N}.\end{array}$
(13)

Noting that $0<\rho <\frac{\left(2\alpha p-1\right)\left(N-1\right)}{2pN}$, we have

$-\left(2\alpha p-1\right)\left(N-1\right)+2\rho pN<0,$

then

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{-1-\left(2\alpha p-1\right)\left(N-1\right)+2\rho pN}<\mathrm{\infty }.$

The proof is completed. □

Proof of Theorem 2.2 We retain the notation ${X}_{nk}^{\mathrm{\prime }}$, ${X}_{nk}^{\mathrm{\prime }\mathrm{\prime }}$ and ${X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}$ from the proof of Theorem 2.1, but now let

$\begin{array}{c}{T}_{n}^{\mathrm{\prime }}=\sum _{k=1}^{n}{a}_{nk}{X}_{nk}^{\mathrm{\prime }},\phantom{\rule{2em}{0ex}}{T}_{n}^{\mathrm{\prime }\mathrm{\prime }}=\sum _{k=1}^{n}{a}_{nk}{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }},\hfill \\ {T}_{n}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}=\sum _{k=1}^{n}{a}_{nk}{X}_{nk}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }},\phantom{\rule{2em}{0ex}}{T}_{n}^{\ast }=\sum _{k=1}^{n}{a}_{nk}\left({X}_{nk}^{\mathrm{\prime }}-E{X}_{nk}^{\mathrm{\prime }}\right).\hfill \end{array}$

Obviously, by following the methods used in the proof of (1), we have

$|\sum _{k=1}^{n}{a}_{nk}E{X}_{nk}^{\mathrm{\prime }}|\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty }.$
(14)

By similar arguments as in the proofs of

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(\underset{1\le j\le n}{max}|{S}_{nj}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty },$

we can prove

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(|{T}_{n}^{\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(|{T}_{n}^{\mathrm{\prime }\mathrm{\prime }\mathrm{\prime }}|>\epsilon \right)<\mathrm{\infty }.$

Here, we omit the details. Therefore, we need only to show

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(|{T}_{n}^{\ast }|>\epsilon \right)<\mathrm{\infty }.$

Take $\lambda <\rho <\frac{\left(2\alpha p-1\right)\left(N-1\right)}{2pN}$. Since $0<\lambda <\frac{1}{2p}$ and $p\ge 1$, we know $0<\frac{\lambda }{1-\lambda }<1$. Hence, from condition (i), we have

${C}_{2}=4\left(1+4\sum _{k=1}^{2m}\psi \left(k\right)\right)\le 4\left(1+4\sum _{k=1}^{\mathrm{\infty }}\psi \left(k\right)\right)<\mathrm{\infty },\phantom{\rule{2em}{0ex}}\psi \left(m\right)=o\left({m}^{\frac{\lambda -1}{\lambda }}\right),$

where $m=\left[{n}^{\lambda }\right]$. Therefore, ${C}_{1}=exp\left\{2e{n}^{1-\lambda }\psi \left(m\right)\right\}\le exp\left\{2e\right\}<\mathrm{\infty }$ for all sufficiently large n.

Take $t=\frac{2\alpha plogn}{\epsilon }$. Clearly, $t\le {n}^{\rho -\lambda }/8$ when n is sufficiently large. Noting that $|{a}_{nk}\left({X}_{nk}^{\mathrm{\prime }}-E{X}_{nk}^{\mathrm{\prime }}\right)|\le 2{n}^{-\rho }=d$, we get $tmd\le 1/4$ when n is sufficiently large. By Lemma 2.2, we have

$\begin{array}{rcl}P\left(|{T}_{n}^{\ast }|>\epsilon \right)& \le & Cexp\left\{-t\epsilon +{C}_{0}{C}_{2}{t}^{2}\sum _{k=1}^{n}{a}_{nk}^{2}\right\}\\ =& Cexp\left\{-2\alpha plogn+{C}_{0}{C}_{2}\frac{{\left(2\alpha p\right)}^{2}{log}^{2}n}{{\epsilon }^{2}}\sum _{k=1}^{n}{a}_{nk}^{2}\right\},\end{array}$
(15)

where ${C}_{0}={sup}_{n,k}E{\left({X}_{nk}^{\mathrm{\prime }}\right)}^{2}<\mathrm{\infty }$ since $E{X}^{2p}<\mathrm{\infty }$ and $p\ge 1$. Note that $logn{\sum }_{k=1}^{n}{a}_{nk}^{2}=o\left(1\right)$ automatically holds if $\alpha >\frac{1}{2}$, since then ${\sum }_{k=1}^{n}{a}_{nk}^{2}\le C{n}^{1-2\alpha }$. Therefore, by condition (ii), we have

${C}_{0}{C}_{2}\frac{{\left(2\alpha p\right)}^{2}logn}{{\epsilon }^{2}}\sum _{k=1}^{n}{a}_{nk}^{2}\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty }.$
(16)

Hence, when n is sufficiently large, by (15) and (16), we have

$P\left(|{T}_{n}^{\ast }|>\epsilon \right)\le Cexp\left(-2\alpha plogn+{2}^{-1}logn\right)=C{n}^{-2\alpha p+1/2}.$

Then

$\sum _{n=1}^{\mathrm{\infty }}{n}^{2\alpha p-2}P\left(|{T}_{n}^{\ast }|>\epsilon \right)\le C\sum _{n=1}^{\mathrm{\infty }}{n}^{-\frac{3}{2}}<\mathrm{\infty }.$

The proof is completed. □

## References

1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. 10.1073/pnas.33.2.25

2. Blum JR, Hanson DL, Koopmans LH: On the strong law of large numbers for a class of stochastic processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1963, 2: 1–11. 10.1007/BF00535293

3. Dobrushin RL: The central limit theorem for non-stationary Markov chain. Theory Probab. Appl. 1956, 1: 72–88.

4. Ibragimov IA: Some limit theorems for stochastic processes stationary in the strict sense. Dokl. Akad. Nauk SSSR 1959, 125: 711–714.

5. Cogburn R: Asymptotic properties of stationary sequences. Univ. Calif. Publ. Stat. 1960, 3: 99–146.

6. Sen PK: A note on weak convergence of empirical processes for sequences of φ -mixing random variables. Ann. Math. Stat. 1971, 42: 2131–2133. 10.1214/aoms/1177693079

7. Choi BD, Sung SH: Almost sure convergence theorems of weighted sums of random variables. Stoch. Anal. Appl. 1987, 5: 365–377. 10.1080/07362998708809124

8. Utev SA: The central limit theorem for φ -mixing arrays of random variables. Theory Probab. Appl. 1990, 35: 131–139. 10.1137/1135013

9. Chen DC: A uniform central limit theorem for nonuniform φ -mixing random fields. Ann. Probab. 1991, 19: 636–649. 10.1214/aop/1176990445

10. Shao QM: Almost sure invariance principles for mixing sequences of random variables. Stoch. Process. Appl. 1993, 48: 319–334. 10.1016/0304-4149(93)90051-5

11. Rüdiger K: Strong laws and summability for sequences of φ -mixing random variables in Banach spaces. Electron. Commun. Probab. 1997, 2: 27–41.

12. Chen PY, Hu TC, Volodin A: Limiting behaviour of moving average processes under φ -mixing assumption. Stat. Probab. Lett. 2009, 79: 105–111. 10.1016/j.spl.2008.07.026

13. Zhou XC: Complete moment convergence of moving average processes under φ -mixing assumptions. Stat. Probab. Lett. 2010, 80: 285–292. 10.1016/j.spl.2009.10.018

14. Wang XJ, Hu SH, Shen Y, Yang WZ: Maximal inequality for ψ -mixing sequences and its applications. Appl. Math. Lett. 2010, 23: 1156–1161. 10.1016/j.aml.2010.04.010

15. Wang XJ, Hu SH, Yang WZ, Shen Y: On complete convergence for weighted sums of φ -mixing random variables. J. Inequal. Appl. 2010. 10.1155/2010/372390

16. Guo ML: Complete moment convergence of weighted sums for arrays of rowwise φ -mixing random variables. Int. J. Math. Math. Sci. 2012., 2012: Article ID 730962

17. Bradley RC: On the ψ -mixing condition for stationary random sequences. Trans. Am. Math. Soc. 1983, 276: 55–66.

18. Yang SC: Almost sure convergence of weighted sums of mixing sequences. J. Syst. Sci. Math. Sci. 1995, 15(3):254–265. (in Chinese)

19. Wu YF, Zhu DJ: Complete convergence of weighted sum of ψ -mixing random sequences. J. Syst. Sci. Math. Sci. 2010, 30: 296–302. (in Chinese)

20. Yang YZ, Liu YY: Strong stability of linear forms of ψ -mixing random variables. Chin. J. Appl. Probab. Stat. 2011, 27: 337–345.

21. Hu TC, Móricz F, Taylor RL: Strong laws of large numbers for arrays of rowwise independent random variables. Acta Math. Hung. 1989, 54: 153–162. 10.1007/BF01950716

22. Taylor RL, Patterson RF, Bozorgnia A: A strong law of large numbers for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 2002, 20(3):643–656. 10.1081/SAP-120004118

23. Baek JI, Seo HY, Lee GH, Choi JY: On the strong law of large numbers for weighted sums of arrays of rowwise negatively dependent random variables. J. Korean Math. Soc. 2009, 46(4):827–840. 10.4134/JKMS.2009.46.4.827

## Acknowledgements

The authors are grateful to the referees for carefully reading the manuscript and for providing some comments and suggestions, which led to improvements in the paper. The research of Yong-Feng Wu was supported by the Humanities and Social Sciences Foundation for the Youth Scholars of Ministry of Education of China (12YJCZH217) and the Natural Science Foundation of Anhui Province (1308085MA03, 1208085MG121). The research of Hui Ding was supported by the NSF of Education Ministry of Anhui province (KJ2012Z278) and the National Statistics Science Research Project (2012LY153).

## Author information


### Corresponding author

Correspondence to Hui Ding.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

YFW carried out the proofs of the main results in the manuscript. HD participated in the design of the study and drafted the manuscript. All authors read and approved the final manuscript.


Wu, YF., Ding, H. On the complete convergence for arrays of rowwise ψ-mixing random variables. J Inequal Appl 2013, 393 (2013). https://doi.org/10.1186/1029-242X-2013-393