# Strong convergence properties for ψ-mixing random variables

## Abstract

In this paper, by using the Rosenthal-type maximal inequality for ψ-mixing random variables, we obtain the Khintchine-Kolmogorov-type convergence theorem, which can be applied to establish the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables. In addition, the strong stability for weighted sums of ψ-mixing random variables is studied, which generalizes the corresponding one of independent random variables.

MSC:60F15.

## 1 Introduction

Let $\left(\mathrm{\Omega },\mathcal{F},P\right)$ be a fixed probability space. The random variables we deal with are all defined on $\left(\mathrm{\Omega },\mathcal{F},P\right)$. Throughout the paper, let $I\left(A\right)$ be the indicator function of the set A. For a random variable X, denote ${X}^{\left(c\right)}=XI\left(|X|\le c\right)$ for some $c>0$. Denote ${log}^{+}x\doteq ln max\left(e,x\right)$. C and c denote positive constants, whose values may differ in various places.

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables defined on a fixed probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$, and let ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ for each $n\ge 1$. Let n and m be positive integers. Write ${\mathcal{F}}_{n}^{m}=\sigma \left({X}_{i},n\le i\le m\right)$. Given two σ-algebras $\mathcal{B}$ and $\mathcal{R}$ in $\mathcal{F}$, let

$\psi \left(\mathcal{B},\mathcal{R}\right)=\underset{A\in \mathcal{B},B\in \mathcal{R},P\left(A\right)P\left(B\right)>0}{sup}\frac{|P\left(AB\right)-P\left(A\right)P\left(B\right)|}{P\left(A\right)P\left(B\right)}.$
(1.1)

Define the mixing coefficients by

$\psi \left(n\right)=\underset{k\ge 1}{sup}\psi \left({\mathcal{F}}_{1}^{k},{\mathcal{F}}_{k+n}^{\mathrm{\infty }}\right),\phantom{\rule{1em}{0ex}}n\ge 0.$

Definition 1.1 A sequence $\left\{{X}_{n},n\ge 1\right\}$ of random variables is said to be a sequence of ψ-mixing random variables if $\psi \left(n\right)↓0$ as $n\to \mathrm{\infty }$.

The concept of ψ-mixing random variables was introduced by Blum et al. [1], and a number of applications have been found. See, for example, Blum et al. [1] for the strong law of large numbers, Yang [2] for the almost sure convergence of weighted sums, Wu [3] for the strong consistency of the M estimator in a linear model, Wang et al. [4] for the maximal inequality and Hájek-Rényi-type inequality, the strong growth rate and the integrability of the supremum, Zhu et al. [5] for strong convergence properties, Pan et al. [6] for the strong convergence of weighted sums, and so on. Compared with the corresponding results for sequences of independent random variables, however, much remains to be done for ψ-mixing sequences. The main purpose of this paper is to establish the Khintchine-Kolmogorov-type convergence theorem, which can be applied to obtain the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables. In addition, we will study the strong stability for weighted sums of ψ-mixing random variables, which generalizes the corresponding result for independent random variables.
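For intuition, the coefficient in (1.1) can be evaluated exactly between two finite σ-algebras, since in the finite case the supremum is attained on pairs of atoms (a mediant-inequality argument: $P\left(AB\right)/P\left(A\right)P\left(B\right)$ is a ratio of sums of the atomwise quantities). The sketch below, a minimal illustration not taken from the paper, does this for $\sigma \left({X}_{0}\right)$ and $\sigma \left({X}_{n}\right)$ of a two-state stationary Markov chain; the chain, its transition matrix and the helper name `psi_finite` are assumptions for the demo, and the value computed lower-bounds the full $\psi \left(n\right)$, which takes a supremum over the whole past and future.

```python
import numpy as np

def psi_finite(joint):
    # psi-coefficient between two finite sigma-algebras, given the joint
    # probability table of their atoms; for finite sigma-algebras the
    # supremum in (1.1) is attained on pairs of atoms, so checking atoms
    # is enough.
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    indep = np.outer(px, py)
    mask = indep > 0
    return float(np.max(np.abs(joint[mask] - indep[mask]) / indep[mask]))

# Illustrative two-state Markov chain (an assumption for the demo).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2.0, 1.0]) / 3.0               # stationary: pi @ P == pi

for n in (1, 5, 20):
    joint = pi[:, None] * np.linalg.matrix_power(P, n)  # P(X_0=i, X_n=j)
    print(n, psi_finite(joint))
```

For this geometrically ergodic chain the printed values decay toward 0 as n grows, consistent with the ψ-mixing property; for an i.i.d. sequence the joint table factorizes and the coefficient is exactly 0.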

For independent and identically distributed random variable sequences, Jamison et al. [7] proved the following theorem.

Theorem A Let $\left\{X,{X}_{n},n\ge 1\right\}$ be an independent and identically distributed sequence with common distribution function $F\left(x\right)$, and let $\left\{{\omega }_{n},n\ge 1\right\}$ be a sequence of positive numbers. Write ${W}_{n}={\sum }_{i=1}^{n}{\omega }_{i}$ and $N\left(x\right)=Card\left\{n:{W}_{n}/{\omega }_{n}\le x\right\}$, $x>0$. If

1. (i)

${W}_{n}\to \mathrm{\infty }$ and ${\omega }_{n}{W}_{n}^{-1}\to 0$ as $n\to \mathrm{\infty }$,

2. (ii)

$E|X|<\mathrm{\infty }$ and $EN\left(|X|\right)<\mathrm{\infty }$,

3. (iii)

${\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}{x}^{2}\left({\int }_{y\ge |x|}N\left(y\right)/{y}^{3}\phantom{\rule{0.2em}{0ex}}dy\right)\phantom{\rule{0.2em}{0ex}}dF\left(x\right)<\mathrm{\infty }$,

then

${W}_{n}^{-1}\sum _{i=1}^{n}{\omega }_{i}{X}_{i}\to c\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}},$
(1.2)

where c is a constant.

The result of Theorem A for independent and identically distributed sequences has been generalized to some dependent sequences, such as negatively associated sequences, negatively superadditive dependent sequences, $\stackrel{˜}{\rho }$-mixing sequences, $\stackrel{˜}{\phi }$-mixing sequences, and so forth. We will further study the strong stability for weighted sums of ψ-mixing random variables, which generalizes the corresponding result for independent sequences. The main results of the paper depend on the following important lemma, the Rosenthal-type maximal inequality for ψ-mixing random variables.

Lemma 1.1 (cf. Wang et al. [4])

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables satisfying ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$, and let $q\ge 2$. Assume that $E{X}_{n}=0$ and $E{|{X}_{n}|}^{q}<\mathrm{\infty }$ for each $n\ge 1$. Then there exists a constant C depending only on q and $\psi \left(\cdot \right)$ such that

$E\left(\underset{1\le j\le n}{max}|\sum _{i=a+1}^{a+j}{X}_{i}{|}^{q}\right)\le C\left[\sum _{i=a+1}^{a+n}E{|{X}_{i}|}^{q}+{\left(\sum _{i=a+1}^{a+n}E{X}_{i}^{2}\right)}^{q/2}\right]$
(1.3)

for every $a\ge 0$ and $n\ge 1$. In particular, we have

$E\left(\underset{1\le j\le n}{max}|\sum _{i=1}^{j}{X}_{i}{|}^{q}\right)\le C\left[\sum _{i=1}^{n}E{|{X}_{i}|}^{q}+{\left(\sum _{i=1}^{n}E{X}_{i}^{2}\right)}^{q/2}\right]$
(1.4)

for every $n\ge 1$.
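Independent mean-zero variables are a special case of ψ-mixing (with $\psi \left(n\right)\equiv 0$), and for $q=2$ the bracket in (1.4) reduces to $2{\sum }_{i=1}^{n}E{X}_{i}^{2}$. The Monte Carlo sketch below, an illustration assuming i.i.d. standard normal summands (not an example from the paper), estimates the left-hand side $E{max}_{1\le j\le n}{S}_{j}^{2}$; in this independent case Doob's maximal inequality already gives the bound $4{\sum }_{i=1}^{n}E{X}_{i}^{2}$, so $C=2$ suffices.

```python
import numpy as np

# Monte Carlo sanity check of (1.4) with q = 2 for independent N(0,1)
# summands -- a special case of psi-mixing, since independence gives
# psi(n) = 0.  (Illustrative, not a proof.)
rng = np.random.default_rng(0)
n, reps = 200, 4000
X = rng.standard_normal((reps, n))
S = np.cumsum(X, axis=1)                    # S[:, j-1] = S_j per replicate

lhs = float(np.mean(np.max(S**2, axis=1)))  # estimates E max_{1<=j<=n} |S_j|^2
rhs_bracket = n + n                         # sum E X_i^2 + (sum E X_i^2)^{q/2}, q = 2
print(lhs / rhs_bracket)                    # any valid C must dominate this ratio
```

The printed ratio stays well below 2, in line with the Doob bound for the independent case.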

The following concept of stochastic domination will be used frequently throughout the paper.

Definition 1.2 A sequence $\left\{{X}_{n},n\ge 1\right\}$ of random variables is said to be stochastically dominated by a random variable X if there exists a constant C such that

$P\left(|{X}_{n}|>x\right)\le CP\left(|X|>x\right)$
(1.5)

for all $x\ge 0$ and $n\ge 1$.

By the definition of stochastic domination and integration by parts, we can get the following basic property for stochastic domination. For the proof, one can refer to Wang et al. [8], Tang [9] or Shen and Wu [10].

Lemma 1.2 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of random variables, which is stochastically dominated by a random variable X. For any $\alpha >0$ and $b>0$, the following inequality holds:

$E{|{X}_{n}|}^{\alpha }I\left(|{X}_{n}|\le b\right)\le C\left\{E{|X|}^{\alpha }I\left(|X|\le b\right)+{b}^{\alpha }P\left(|X|>b\right)\right\},$

where C is a positive constant.
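For completeness, here is a sketch of the standard argument behind Lemma 1.2, using the tail-integral form of the expectation together with (1.5):

```latex
\begin{aligned}
E|X_n|^{\alpha} I(|X_n|\le b)
  &\le \alpha\int_0^{b} t^{\alpha-1} P(|X_n|>t)\,dt
   \le C\alpha\int_0^{b} t^{\alpha-1} P(|X|>t)\,dt \\
  &= C\,E\bigl(\min(|X|,b)\bigr)^{\alpha}
   = C\bigl\{E|X|^{\alpha} I(|X|\le b) + b^{\alpha} P(|X|>b)\bigr\}.
\end{aligned}
```

The first step bounds the truncated moment by the tail integral over $\left[0,b\right]$, the second applies (1.5), and the last two equalities are the tail-integral identity for $min\left(|X|,b\right)$.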

## 2 Khintchine-Kolmogorov-type convergence theorem

In this section, we will prove the Khintchine-Kolmogorov-type convergence theorem for ψ-mixing random variables. By using the Khintchine-Kolmogorov-type convergence theorem, we can get the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables.

Theorem 2.1 (Khintchine-Kolmogorov-type convergence theorem)

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables satisfying ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. Assume that

$\sum _{n=1}^{\mathrm{\infty }}Var\left({X}_{n}\right)<\mathrm{\infty },$
(2.1)

Then ${\sum }_{n=1}^{\mathrm{\infty }}\left({X}_{n}-E{X}_{n}\right)$ converges a.s.

Proof Without loss of generality, we assume that $E{X}_{n}=0$ for all $n\ge 1$. For any $\epsilon >0$, it can be checked that

$\begin{array}{rcl}P\left(\underset{k,m\ge n}{sup}|{S}_{k}-{S}_{m}|>\epsilon \right)& \le & P\left(\underset{k\ge n}{sup}|{S}_{k}-{S}_{n}|>\frac{\epsilon }{2}\right)+P\left(\underset{m\ge n}{sup}|{S}_{m}-{S}_{n}|>\frac{\epsilon }{2}\right)\\ \le & 2\underset{N\to \mathrm{\infty }}{lim}P\left(\underset{n\le k\le N}{max}|{S}_{k}-{S}_{n}|>\frac{\epsilon }{2}\right)\\ \le & 2\underset{N\to \mathrm{\infty }}{lim}\frac{2C}{{\left(\frac{\epsilon }{2}\right)}^{2}}\sum _{i=n+1}^{N}Var\left({X}_{i}\right)\\ =& \frac{16C}{{\epsilon }^{2}}\sum _{i=n+1}^{\mathrm{\infty }}Var\left({X}_{i}\right)\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },\end{array}$

where the last inequality follows from Markov’s inequality and Lemma 1.1 with $q=2$. Thus, the sequence $\left\{{S}_{n},n\ge 1\right\}$ is a.s. Cauchy, and, therefore, we can obtain the desired result immediately. This completes the proof of the theorem. □

With the Khintchine-Kolmogorov-type convergence theorem in hand, we can get the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables.

Theorem 2.2 (Three series theorem)

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables satisfying ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. For some $c>0$, if

$\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>c\right)<\mathrm{\infty },$
(2.2)
$\sum _{n=1}^{\mathrm{\infty }}E{X}_{n}^{\left(c\right)}\phantom{\rule{1em}{0ex}}\mathit{\text{converges}},$
(2.3)
$\sum _{n=1}^{\mathrm{\infty }}Var\left({X}_{n}^{\left(c\right)}\right)<\mathrm{\infty },$
(2.4)

then ${\sum }_{n=1}^{\mathrm{\infty }}{X}_{n}$ converges almost surely.

Proof According to (2.4) and Theorem 2.1, we have

$\sum _{n=1}^{\mathrm{\infty }}\left({X}_{n}^{\left(c\right)}-E{X}_{n}^{\left(c\right)}\right)\phantom{\rule{1em}{0ex}}\text{converges a.s.}$
(2.5)

It follows by (2.3) and (2.5) that

$\sum _{n=1}^{\mathrm{\infty }}{X}_{n}^{\left(c\right)}\phantom{\rule{1em}{0ex}}\text{converges a.s.}$
(2.6)

Obviously, (2.2) implies that

$\sum _{n=1}^{\mathrm{\infty }}P\left({X}_{n}\ne {X}_{n}^{\left(c\right)}\right)=\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>c\right)<\mathrm{\infty }.$
(2.7)

It follows from (2.7) and the Borel-Cantelli lemma that

$P\left({X}_{n}\ne {X}_{n}^{\left(c\right)},\text{i.o.}\right)=0.$
(2.8)

Finally, combining (2.6) with (2.8), we can get that ${\sum }_{n=1}^{\mathrm{\infty }}{X}_{n}$ converges a.s. The proof is completed. □
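As a concrete illustration of Theorem 2.2 (with independent summands, the $\psi \left(n\right)\equiv 0$ special case, and choices that are assumptions for the demo rather than examples from the paper), take ${X}_{n}={Z}_{n}/n$ with ${Z}_{n}$ i.i.d. $N\left(0,1\right)$ and $c=1$: series (2.2) converges because $P\left(|{X}_{n}|>1\right)=P\left(|Z|>n\right)$ decays rapidly, (2.3) holds with every truncated mean zero by symmetry, and (2.4) holds since $Var\left({X}_{n}^{\left(1\right)}\right)\le {n}^{-2}$. The sketch below simulates the partial sums and checks that their tail oscillation is small, as a.s. convergence predicts.

```python
import numpy as np

# Illustrative check of the three series theorem with X_n = Z_n / n,
# Z_n iid N(0,1), c = 1 (independent, hence psi(n) = 0):
#   (2.2): sum P(|X_n| > 1) = sum P(|Z| > n) < infinity,
#   (2.3): every truncated mean is 0 by symmetry,
#   (2.4): sum Var(X_n^{(1)}) <= sum n^{-2} < infinity,
# so sum X_n should converge almost surely.
rng = np.random.default_rng(1)
N = 100_000
X = rng.standard_normal(N) / np.arange(1, N + 1)
S = np.cumsum(X)
tail_osc = float(np.ptp(S[N // 2:]))   # max - min of the tail partial sums
print(tail_osc)                        # small when the series converges
```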

Theorem 2.3 (Chung-type strong law of large numbers)

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of mean zero ψ-mixing random variables satisfying ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$, and let $\left\{{a}_{n},n\ge 1\right\}$ be a sequence of positive numbers satisfying $0<{a}_{n}↑\mathrm{\infty }$. If there exists some $p\in \left[1,2\right]$ such that

$\sum _{n=1}^{\mathrm{\infty }}\frac{E{|{X}_{n}|}^{p}}{{a}_{n}^{p}}<\mathrm{\infty },$
(2.9)

then

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{a}_{n}}\sum _{i=1}^{n}{X}_{i}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(2.10)

Proof It follows by (2.9) that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}\frac{Var\left({X}_{n}^{\left({a}_{n}\right)}\right)}{{a}_{n}^{2}}& \le & \sum _{n=1}^{\mathrm{\infty }}\frac{E{\left({X}_{n}^{\left({a}_{n}\right)}\right)}^{2}}{{a}_{n}^{2}}\\ =& \sum _{n=1}^{\mathrm{\infty }}\frac{E{X}_{n}^{2}I\left(|{X}_{n}|\le {a}_{n}\right)}{{a}_{n}^{2}}\\ \le & \sum _{n=1}^{\mathrm{\infty }}\frac{E{|{X}_{n}|}^{p}}{{a}_{n}^{p}}<\mathrm{\infty }.\end{array}$

Therefore, we have by Theorem 2.1 that

$\sum _{n=1}^{\mathrm{\infty }}\frac{{X}_{n}^{\left({a}_{n}\right)}-E{X}_{n}^{\left({a}_{n}\right)}}{{a}_{n}}\phantom{\rule{1em}{0ex}}\text{converges a.s.}$
(2.11)

Since $p\in \left[1,2\right]$, it follows by $E{X}_{n}=0$ that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}\frac{|E{X}_{n}^{\left({a}_{n}\right)}|}{{a}_{n}}& =& \sum _{n=1}^{\mathrm{\infty }}\frac{|E{X}_{n}I\left(|{X}_{n}|\le {a}_{n}\right)|}{{a}_{n}}\\ =& \sum _{n=1}^{\mathrm{\infty }}\frac{|E{X}_{n}I\left(|{X}_{n}|>{a}_{n}\right)|}{{a}_{n}}\\ \le & \sum _{n=1}^{\mathrm{\infty }}\frac{E{|{X}_{n}|}^{p}}{{a}_{n}^{p}}<\mathrm{\infty },\end{array}$

which implies that

$\sum _{n=1}^{\mathrm{\infty }}\frac{E{X}_{n}^{\left({a}_{n}\right)}}{{a}_{n}}\phantom{\rule{1em}{0ex}}\text{converges}.$
(2.12)

Combining (2.11) and (2.12), we can see that

$\sum _{n=1}^{\mathrm{\infty }}\frac{{X}_{n}^{\left({a}_{n}\right)}}{{a}_{n}}\phantom{\rule{1em}{0ex}}\text{converges a.s.}$
(2.13)

By Markov’s inequality and (2.9), we have

$\sum _{n=1}^{\mathrm{\infty }}P\left({X}_{n}\ne {X}_{n}^{\left({a}_{n}\right)}\right)=\sum _{n=1}^{\mathrm{\infty }}P\left(|{X}_{n}|>{a}_{n}\right)\le \sum _{n=1}^{\mathrm{\infty }}\frac{E{|{X}_{n}|}^{p}}{{a}_{n}^{p}}<\mathrm{\infty }.$
(2.14)

Hence, the desired result (2.10) follows from (2.13), (2.14), the Borel-Cantelli lemma and Kronecker’s lemma immediately. □
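A quick simulation of Theorem 2.3 (again an illustration with independent summands and choices that are assumptions, not from the paper): for ${a}_{n}=n$, $p=2$ and mean-zero summands of unit variance, condition (2.9) reads ${\sum }_{n=1}^{\mathrm{\infty }}{n}^{-2}<\mathrm{\infty }$, so the theorem gives ${S}_{n}/n\to 0$ a.s.

```python
import numpy as np

# Illustrative simulation of the Chung-type SLLN with a_n = n, p = 2,
# and independent mean-zero uniform summands (psi(n) = 0):
#   sum E|X_n|^2 / n^2 = sum n^{-2} < infinity  =>  S_n / n -> 0 a.s.
rng = np.random.default_rng(2)
N = 200_000
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=N)   # mean 0, variance 1
ratio = np.cumsum(X) / np.arange(1, N + 1)         # S_n / n
print(abs(ratio[-1]))                              # should be near 0
```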

## 3 Strong stability for weighted sums of ψ-mixing random variables

In the previous section, we were able to get the Khintchine-Kolmogorov-type convergence theorem for ψ-mixing random variables. In this section, we will study the strong stability for weighted sums of ψ-mixing random variables by using the Khintchine-Kolmogorov-type convergence theorem.

The concept of strong stability is as follows.

Definition 3.1 A sequence $\left\{{Y}_{n},n\ge 1\right\}$ is said to be strongly stable if there exist two constant sequences $\left\{{b}_{n},n\ge 1\right\}$ and $\left\{{d}_{n},n\ge 1\right\}$ with $0<{b}_{n}↑\mathrm{\infty }$ such that

${b}_{n}^{-1}{Y}_{n}-{d}_{n}\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$

For the definition of strong stability, one can refer to Chow and Teicher [11]. Many authors have extended the strong law of large numbers for sequences of random variables to the case of triangular array of rowwise random variables and arrays of rowwise random variables. See, for example, Hu and Taylor [12], Bai and Cheng [13], Gan and Chen [14], Kuczmaszewska [15], Wu [16–18], Sung [19], Wang et al. [20–24], Zhou [25], Shen [26], Shen et al. [27], and so on.

Our main results are as follows.

Theorem 3.1 Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be two sequences of positive numbers with ${c}_{n}={b}_{n}/{a}_{n}$ and ${b}_{n}↑\mathrm{\infty }$. Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables, which is stochastically dominated by a random variable X. Assume that ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. Denote $N\left(x\right)=Card\left\{n:{c}_{n}\le x\right\}$ for $x>0$, and let $1\le p\le 2$. If the following conditions are satisfied

1. (i)

$EN\left(|X|\right)<\mathrm{\infty }$,

2. (ii)

${\int }_{0}^{\mathrm{\infty }}{t}^{p-1}P\left(|X|>t\right)\left({\int }_{t}^{\mathrm{\infty }}N\left(y\right)/{y}^{p+1}\phantom{\rule{0.2em}{0ex}}dy\right)\phantom{\rule{0.2em}{0ex}}dt<\mathrm{\infty }$,

then there exist ${d}_{n}\in \mathbf{R}$, $n=1,2,\dots$ , such that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}{X}_{i}-{d}_{n}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(3.1)
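To make $N\left(x\right)$ and condition (i) concrete, consider the illustrative Jamison-type choice ${a}_{n}=1$, ${b}_{n}=n$ (so ${c}_{n}=n$), for which $N\left(x\right)=⌊x⌋$ for $x>0$ and hence $EN\left(|X|\right)\le E|X|$, so condition (i) holds whenever $E|X|<\mathrm{\infty }$. The helper `N_of` and the Gaussian choice of X below are assumptions for the demo, not notation from the paper.

```python
import numpy as np

def N_of(x, c):
    # N(x) = Card{ n : c_n <= x }, computed over a finite prefix of
    # the weight ratios c_n (sufficient here since c_n is increasing).
    return int(np.sum(c <= x))

c = np.arange(1, 10_001, dtype=float)   # c_n = b_n / a_n = n for a_n = 1, b_n = n
print(N_of(3.5, c))                     # N(3.5) = floor(3.5) = 3

# Monte Carlo E N(|X|) for X ~ N(0,1): here N(|X|) = floor(|X|) <= |X|,
# so the estimate is bounded by E|X| = sqrt(2/pi) ~ 0.798.
rng = np.random.default_rng(3)
x = np.abs(rng.standard_normal(50_000))
print(float(np.mean(np.floor(x))))
```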

Proof Let ${S}_{n}={\sum }_{i=1}^{n}{a}_{i}{X}_{i}$, ${T}_{n}={\sum }_{i=1}^{n}{a}_{i}{X}_{i}^{\left({c}_{i}\right)}$. By Definition 1.2 and (i), we can see that

$\sum _{i=1}^{\mathrm{\infty }}P\left({X}_{i}\ne {X}_{i}^{\left({c}_{i}\right)}\right)=\sum _{i=1}^{\mathrm{\infty }}P\left(|{X}_{i}|>{c}_{i}\right)\le C\sum _{i=1}^{\mathrm{\infty }}P\left(|X|>{c}_{i}\right)\le CEN\left(|X|\right)<\mathrm{\infty }.$
(3.2)

By the Borel-Cantelli lemma, for any sequence $\left\{{d}_{n},n\ge 1\right\}\subset \mathbf{R}$, the sequences $\left\{{b}_{n}^{-1}{T}_{n}-{d}_{n}\right\}$ and $\left\{{b}_{n}^{-1}{S}_{n}-{d}_{n}\right\}$ converge on the same set and to the same limit. We will show that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0$ a.s., which gives the theorem with ${d}_{n}={b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}$. Note that $\left\{{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right),i\ge 1\right\}$ is a sequence of mean zero ψ-mixing random variables. It follows from the ${C}_{r}$ inequality, Jensen’s inequality and Lemma 1.2 that

$\begin{array}{l}\sum _{n=1}^{\mathrm{\infty }}\frac{E{|{a}_{n}\left({X}_{n}^{\left({c}_{n}\right)}-E{X}_{n}^{\left({c}_{n}\right)}\right)|}^{p}}{{b}_{n}^{p}}\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}E\left({|{X}_{n}|}^{p}I\left(|{X}_{n}|\le {c}_{n}\right)\right)\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}\left[{c}_{n}^{p}P\left(|X|>{c}_{n}\right)+E{|X|}^{p}I\left(|X|\le {c}_{n}\right)\right]\\ \phantom{\rule{1em}{0ex}}\le C\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)+C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}{\int }_{0}^{{c}_{n}}{t}^{p-1}P\left(|X|>t\right)\phantom{\rule{0.2em}{0ex}}dt\end{array}$
(3.3)

and

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}{\int }_{0}^{{c}_{n}}{t}^{p-1}P\left(|X|>t\right)\phantom{\rule{0.2em}{0ex}}dt& \le & {\int }_{0}^{\mathrm{\infty }}{t}^{p-1}P\left(|X|>t\right)\sum _{n:{c}_{n}\ge t}{c}_{n}^{-p}\phantom{\rule{0.2em}{0ex}}dt\\ \le & C{\int }_{0}^{\mathrm{\infty }}{t}^{p-1}P\left(|X|>t\right)\left({\int }_{t}^{\mathrm{\infty }}N\left(y\right)/{y}^{p+1}\phantom{\rule{0.2em}{0ex}}dy\right)\phantom{\rule{0.2em}{0ex}}dt.\end{array}$
(3.4)

The last inequality above follows from the fact that

$\begin{array}{rcl}\sum _{n:{c}_{n}\ge t}{c}_{n}^{-p}& =& \underset{u\to \mathrm{\infty }}{lim}\sum _{n:t\le {c}_{n}\le u}{c}_{n}^{-p}\\ =& \underset{u\to \mathrm{\infty }}{lim}{\int }_{t}^{u}{y}^{-p}\phantom{\rule{0.2em}{0ex}}dN\left(y\right)\\ =& \underset{u\to \mathrm{\infty }}{lim}\left({u}^{-p}N\left(u\right)-{t}^{-p}N\left(t\right)+p{\int }_{t}^{u}{y}^{-\left(p+1\right)}N\left(y\right)\phantom{\rule{0.2em}{0ex}}dy\right)\end{array}$

and

$\underset{u\to \mathrm{\infty }}{lim}{u}^{-p}N\left(u\right)=0,$

which holds because $N\left(y\right)$ is nondecreasing, so that ${u}^{-p}N\left(u\right)=pN\left(u\right){\int }_{u}^{\mathrm{\infty }}{y}^{-\left(p+1\right)}\phantom{\rule{0.2em}{0ex}}dy\le p{\int }_{u}^{\mathrm{\infty }}N\left(y\right){y}^{-\left(p+1\right)}\phantom{\rule{0.2em}{0ex}}dy\to 0$ as $u\to \mathrm{\infty }$ whenever the last integral is finite (and (3.4) holds trivially otherwise).

Obviously,

$\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\le EN\left(|X|\right)<\mathrm{\infty }.$
(3.5)

Thus, by (3.3)-(3.5) and condition (ii), we can see that

$\sum _{n=1}^{\mathrm{\infty }}\frac{E{|{a}_{n}\left({X}_{n}^{\left({c}_{n}\right)}-E{X}_{n}^{\left({c}_{n}\right)}\right)|}^{p}}{{b}_{n}^{p}}<\mathrm{\infty }.$
(3.6)

Therefore,

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.},$

which follows from (3.6), Theorem 2.3 and Kronecker’s lemma immediately. The desired result is obtained. □

Corollary 3.1 Let the conditions of Theorem  3.1 be satisfied, and let $E{X}_{n}=0$ for $n\ge 1$. Assume that ${\int }_{1}^{\mathrm{\infty }}EN\left(|X|/s\right)\phantom{\rule{0.2em}{0ex}}ds<\mathrm{\infty }$. Then ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}\to 0$ a.s.

Proof By Theorem 3.1, we only need to prove that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.7)

In fact,

$\begin{array}{rcl}\sum _{i=1}^{\mathrm{\infty }}\frac{{a}_{i}|E{X}_{i}^{\left({c}_{i}\right)}|}{{b}_{i}}& =& \sum _{i=1}^{\mathrm{\infty }}{c}_{i}^{-1}|E{X}_{i}I\left(|{X}_{i}|\le {c}_{i}\right)|\le \sum _{i=1}^{\mathrm{\infty }}{c}_{i}^{-1}E|{X}_{i}|I\left(|{X}_{i}|>{c}_{i}\right)\\ \le & \sum _{i=1}^{\mathrm{\infty }}{c}_{i}^{-1}\left({c}_{i}P\left(|{X}_{i}|>{c}_{i}\right)+{\int }_{{c}_{i}}^{\mathrm{\infty }}P\left(|{X}_{i}|>t\right)\phantom{\rule{0.2em}{0ex}}dt\right)\\ \le & CEN\left(|X|\right)+C{\int }_{1}^{\mathrm{\infty }}EN\left(|X|/s\right)\phantom{\rule{0.2em}{0ex}}ds<\mathrm{\infty },\end{array}$

which implies (3.7) by Kronecker’s lemma. We complete the proof of the corollary. □

Theorem 3.2 Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be two sequences of positive numbers with ${c}_{n}={b}_{n}/{a}_{n}$ and ${b}_{n}↑\mathrm{\infty }$. Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of mean zero ψ-mixing random variables, which is stochastically dominated by a random variable X. Assume that ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. Denote $N\left(x\right)=Card\left\{n:{c}_{n}\le x\right\}$ for $x>0$, and let $1\le p\le 2$. If the following conditions are satisfied

1. (i)

$EN\left(|X|\right)<\mathrm{\infty }$,

2. (ii)

${\int }_{1}^{\mathrm{\infty }}EN\left(|X|/s\right)\phantom{\rule{0.2em}{0ex}}ds<\mathrm{\infty }$,

3. (iii)

${max}_{1\le j\le n}{c}_{j}^{p}{\sum }_{i=n}^{\mathrm{\infty }}{c}_{i}^{-p}=O\left(n\right)$,

then

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}{X}_{i}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(3.8)

Proof By condition (i) and (3.2), we only need to prove that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}^{\left({c}_{i}\right)}\to 0$ a.s. For this purpose, it suffices to show that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.9)

and

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}\to 0.$
(3.10)

Equation (3.10) follows from the proof of Corollary 3.1 immediately.

To prove (3.9), we set ${\epsilon }_{0}=0$ and ${\epsilon }_{n}={max}_{1\le j\le n}{c}_{j}$ for $n\ge 1$. It follows from ${C}_{r}$ inequality, Jensen’s inequality and Lemma 1.2 that

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}\frac{E{|{a}_{n}\left({X}_{n}^{\left({c}_{n}\right)}-E{X}_{n}^{\left({c}_{n}\right)}\right)|}^{p}}{{b}_{n}^{p}}& \le & C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}E\left({|{X}_{n}|}^{p}I\left(|{X}_{n}|\le {c}_{n}\right)\right)\\ \le & C\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)+C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}E{|X|}^{p}I\left(|X|\le {c}_{n}\right).\end{array}$

Obviously,

$\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\le EN\left(|X|\right)<\mathrm{\infty }$
(3.11)

and

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}E{|X|}^{p}I\left(|X|\le {c}_{n}\right)& \le & \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-p}E{|X|}^{p}I\left(|X|\le {\epsilon }_{n}\right)\\ \le & \sum _{j=1}^{\mathrm{\infty }}{\epsilon }_{j}^{p}P\left({\epsilon }_{j-1}<|X|\le {\epsilon }_{j}\right)\sum _{n=j}^{\mathrm{\infty }}{c}_{n}^{-p}\le C\sum _{j=1}^{\mathrm{\infty }}P\left(|X|>{\epsilon }_{j-1}\right)\\ \le & C\left(1+\sum _{n=1}^{\mathrm{\infty }}P\left(|X|>{c}_{n}\right)\right)\le C\left(1+EN\left(|X|\right)\right)<\mathrm{\infty }.\end{array}$

Therefore,

$\sum _{n=1}^{\mathrm{\infty }}\frac{E{|{a}_{n}\left({X}_{n}^{\left({c}_{n}\right)}-E{X}_{n}^{\left({c}_{n}\right)}\right)|}^{p}}{{b}_{n}^{p}}<\mathrm{\infty },$
(3.12)

which follows from the statements above. By Theorem 2.3 and Kronecker’s lemma, we can obtain (3.9) immediately. The proof is completed. □

Theorem 3.3 Let $\left\{{a}_{n},n\ge 1\right\}$ and $\left\{{b}_{n},n\ge 1\right\}$ be two sequences of positive numbers with ${c}_{n}={b}_{n}/{a}_{n}$ and ${b}_{n}↑\mathrm{\infty }$. Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables, which is stochastically dominated by a random variable X. Assume that ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. Define $N\left(x\right)=Card\left\{n:{c}_{n}\le x\right\}$, $R\left(x\right)={\int }_{x}^{\mathrm{\infty }}N\left(y\right){y}^{-3}\phantom{\rule{0.2em}{0ex}}dy$, $x>0$. If the following conditions are satisfied

1. (i)

$N\left(x\right)<\mathrm{\infty }$ for any $x>0$,

2. (ii)

$R\left(1\right)={\int }_{1}^{\mathrm{\infty }}N\left(y\right){y}^{-3}\phantom{\rule{0.2em}{0ex}}dy<\mathrm{\infty }$,

3. (iii)

$E{X}^{2}R\left(|X|\right)<\mathrm{\infty }$,

then there exist ${d}_{n}\in \mathbf{R}$, $n=1,2,\dots$ , such that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}{X}_{i}-{d}_{n}\to 0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}}$
(3.13)

Proof Since $N\left(x\right)$ is nondecreasing, for any $x>0$ we have

$R\left(x\right)\ge N\left(x\right){\int }_{x}^{\mathrm{\infty }}{y}^{-3}\phantom{\rule{0.2em}{0ex}}dy=\frac{1}{2}{x}^{-2}N\left(x\right),$
(3.14)

which implies that $EN\left(|X|\right)\le 2E{X}^{2}R\left(|X|\right)<\mathrm{\infty }$. Therefore,

$\begin{array}{rl}\sum _{i=1}^{\mathrm{\infty }}P\left({X}_{i}\ne {X}_{i}^{\left({c}_{i}\right)}\right)& =\sum _{i=1}^{\mathrm{\infty }}P\left(|{X}_{i}|>{c}_{i}\right)\\ \le C\sum _{i=1}^{\mathrm{\infty }}P\left(|X|>{c}_{i}\right)\le CEN\left(|X|\right)<\mathrm{\infty }.\end{array}$
(3.15)

By the Borel-Cantelli lemma, for any sequence $\left\{{d}_{n},n\ge 1\right\}\subset \mathbf{R}$, the sequences $\left\{{b}_{n}^{-1}{S}_{n}-{d}_{n}\right\}$ and $\left\{{b}_{n}^{-1}{T}_{n}-{d}_{n}\right\}$ (with ${S}_{n}={\sum }_{i=1}^{n}{a}_{i}{X}_{i}$ and ${T}_{n}={\sum }_{i=1}^{n}{a}_{i}{X}_{i}^{\left({c}_{i}\right)}$ as in the proof of Theorem 3.1) converge on the same set and to the same limit. We will show that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0$ a.s., which gives the theorem with ${d}_{n}={b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}$. It follows from Lemma 1.2 that

$\begin{array}{rl}\sum _{n=1}^{\mathrm{\infty }}\frac{Var\left({a}_{n}{X}_{n}^{\left({c}_{n}\right)}\right)}{{b}_{n}^{2}}& \le \sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}E{\left({X}_{n}^{\left({c}_{n}\right)}\right)}^{2}=\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}E{X}_{n}^{2}I\left(|{X}_{n}|\le {c}_{n}\right)\\ \le CEN\left(|X|\right)+C\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)\end{array}$
(3.16)

and

$\begin{array}{rcl}\sum _{n=1}^{\mathrm{\infty }}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)& =& \sum _{n:{c}_{n}\le 1}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)+\sum _{n:{c}_{n}>1}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)\\ \doteq & {I}_{1}+{I}_{2}.\end{array}$
(3.17)

Since $N\left(1\right)=Card\left\{n:{c}_{n}\le 1\right\}\le 2R\left(1\right)<\mathrm{\infty }$ from (3.14) and (ii), it follows that ${I}_{1}<\mathrm{\infty }$. For ${I}_{2}$, we have

$\begin{array}{c}{I}_{2}=\sum _{n:{c}_{n}>1}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)=\sum _{k=2}^{\mathrm{\infty }}\sum _{k-1<{c}_{n}\le k}{c}_{n}^{-2}E{X}^{2}I\left(|X|\le {c}_{n}\right)\hfill \\ \phantom{{I}_{2}}\le \sum _{k=2}^{\mathrm{\infty }}\left(N\left(k\right)-N\left(k-1\right)\right){\left(k-1\right)}^{-2}E{X}^{2}I\left(|X|\le 1\right)\hfill \\ \phantom{{I}_{2}\le }+\sum _{k=2}^{\mathrm{\infty }}\left(N\left(k\right)-N\left(k-1\right)\right){\left(k-1\right)}^{-2}E{X}^{2}I\left(1<|X|\le k\right)\hfill \\ \phantom{{I}_{2}}\doteq {I}_{21}+{I}_{22},\hfill \\ {I}_{21}\le C\sum _{k=2}^{\mathrm{\infty }}\left(N\left(k\right)-N\left(k-1\right)\right)\sum _{j=k-1}^{\mathrm{\infty }}{j}^{-3}=C\sum _{j=1}^{\mathrm{\infty }}{j}^{-3}\sum _{k=2}^{j+1}\left(N\left(k\right)-N\left(k-1\right)\right)\hfill \\ \phantom{{I}_{21}}\le C\sum _{j=1}^{\mathrm{\infty }}{\left(j+1\right)}^{-3}N\left(j+1\right)\le C{\int }_{1}^{\mathrm{\infty }}{y}^{-3}N\left(y\right)\phantom{\rule{0.2em}{0ex}}dy<\mathrm{\infty }.\hfill \end{array}$

Since $N\left(x\right)$ is nondecreasing and $R\left(x\right)$ is nonincreasing, we have

$\begin{array}{rcl}{I}_{22}& \le & \sum _{m=2}^{\mathrm{\infty }}E{X}^{2}I\left(m-1<|X|\le m\right)\sum _{k=m}^{\mathrm{\infty }}N\left(k\right)\left({\left(k-1\right)}^{-2}-{k}^{-2}\right)\\ \le & C\sum _{m=2}^{\mathrm{\infty }}E{X}^{2}I\left(m-1<|X|\le m\right)\sum _{k=m}^{\mathrm{\infty }}{\int }_{k}^{k+1}N\left(x\right){x}^{-3}\phantom{\rule{0.2em}{0ex}}dx\\ \le & C\sum _{m=2}^{\mathrm{\infty }}E{X}^{2}R\left(|X|\right)I\left(m-1<|X|\le m\right)\le CE{X}^{2}R\left(|X|\right)<\mathrm{\infty }.\end{array}$

Therefore,

$\sum _{n=1}^{\mathrm{\infty }}\frac{Var\left({a}_{n}{X}_{n}^{\left({c}_{n}\right)}\right)}{{b}_{n}^{2}}<\mathrm{\infty }$
(3.18)

following from the above statements. By Theorem 2.1 and Kronecker’s lemma, we have

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.19)

Taking ${d}_{n}={b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}$, we have ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}^{\left({c}_{i}\right)}-{d}_{n}\to 0$ a.s. The proof is completed. □

Corollary 3.2 Let the conditions of Theorem  3.3 be satisfied. If $E{X}_{n}=0$, $n\ge 1$ and ${\int }_{1}^{\mathrm{\infty }}EN\left(|X|/s\right)\phantom{\rule{0.2em}{0ex}}ds<\mathrm{\infty }$, then ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}\to 0$ a.s.

In the following, let $\alpha \left(x\right):{\mathbf{R}}_{+}\to {\mathbf{R}}_{+}$ be a positive and nonincreasing function, and set ${a}_{n}=\alpha \left(n\right)$, ${b}_{n}={\sum }_{i=1}^{n}{a}_{i}$, ${c}_{n}={b}_{n}/{a}_{n}$, $n\ge 1$, where

$0<{b}_{n}↑\mathrm{\infty },$
(3.20)
$0<\underset{n\to \mathrm{\infty }}{lim inf}{n}^{-1}{c}_{n}\alpha \left(log{c}_{n}\right)\le \underset{n\to \mathrm{\infty }}{lim sup}{n}^{-1}{c}_{n}\alpha \left(log{c}_{n}\right)<\mathrm{\infty },$
(3.21)
$x\alpha \left({log}^{+}x\right)\phantom{\rule{1em}{0ex}}\text{is nondecreasing on}\phantom{\rule{0.2em}{0ex}}\left(0,\mathrm{\infty }\right).$
(3.22)

Theorem 3.4 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of identically distributed ψ-mixing random variables with ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. If $E|{X}_{1}|\alpha \left({log}^{+}|{X}_{1}|\right)<\mathrm{\infty }$, then there exist ${d}_{n}\in \mathbf{R}$, $n=1,2,\dots$ , such that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}-{d}_{n}\to 0$ a.s.

Proof Since $\alpha \left(x\right)$ is positive and nonincreasing for $x>0$ and $0<{b}_{n}↑\mathrm{\infty }$, it follows that ${c}_{n}↑\mathrm{\infty }$. By (3.21), we can choose constants $m\in \mathbf{N}$, ${C}_{1}>0$, ${C}_{2}>0$ such that for $n\ge m$,

${C}_{1}n\le {c}_{n}\alpha \left(log{c}_{n}\right)\le {C}_{2}n.$
(3.23)

Therefore, for $n\ge m$, we have $\frac{1}{{c}_{n}}\le \frac{\alpha \left(log{c}_{m}\right)}{{C}_{1}n}$, which implies that

$\sum _{j=m}^{\mathrm{\infty }}{c}_{j}^{-2}\le \sum _{j=m}^{\mathrm{\infty }}\frac{{\alpha }^{2}\left(log{c}_{m}\right)}{{C}_{1}^{2}{j}^{2}}\le \frac{2{\alpha }^{2}\left(log{c}_{m}\right)}{{C}_{1}^{2}m}.$
(3.24)

By (3.22)-(3.24), it follows that

$\begin{array}{rcl}\sum _{j=m}^{\mathrm{\infty }}\frac{E{\left({a}_{j}{X}_{j}^{\left({c}_{j}\right)}\right)}^{2}}{{b}_{j}^{2}}& \le & \sum _{j=m}^{\mathrm{\infty }}{c}_{j}^{-2}\left({\int }_{\left\{|{X}_{1}|\le {c}_{m-1}\right\}}{X}_{1}^{2}\phantom{\rule{0.2em}{0ex}}dP+\sum _{i=m}^{j}{\int }_{\left\{{c}_{i-1}<|{X}_{1}|\le {c}_{i}\right\}}{X}_{1}^{2}\phantom{\rule{0.2em}{0ex}}dP\right)\\ \le & C+\sum _{j=m}^{\mathrm{\infty }}{c}_{j}^{-2}\sum _{i=m}^{j}{\int }_{\left\{{c}_{i-1}<|{X}_{1}|\le {c}_{i}\right\}}{X}_{1}^{2}\phantom{\rule{0.2em}{0ex}}dP\\ \le & C+C\sum _{i=m}^{\mathrm{\infty }}{i}^{-1}{\alpha }^{2}\left(log{c}_{i}\right){\int }_{\left\{{c}_{i-1}<|{X}_{1}|\le {c}_{i}\right\}}{X}_{1}^{2}\phantom{\rule{0.2em}{0ex}}dP\\ \le & C+C\sum _{i=m}^{\mathrm{\infty }}\alpha \left(log{c}_{i}\right){\int }_{\left\{{c}_{i-1}<|{X}_{1}|\le {c}_{i}\right\}}|{X}_{1}|\phantom{\rule{0.2em}{0ex}}dP\\ \le & C+C\sum _{i=m}^{\mathrm{\infty }}{\int }_{\left\{{c}_{i-1}<|{X}_{1}|\le {c}_{i}\right\}}|{X}_{1}|\alpha \left({log}^{+}|{X}_{1}|\right)\phantom{\rule{0.2em}{0ex}}dP<\mathrm{\infty }.\end{array}$

Therefore,

$\sum _{j=1}^{\mathrm{\infty }}\frac{Var\left({a}_{j}{X}_{j}^{\left({c}_{j}\right)}\right)}{{b}_{j}^{2}}\le \sum _{j=1}^{\mathrm{\infty }}\frac{E{\left({a}_{j}{X}_{j}^{\left({c}_{j}\right)}\right)}^{2}}{{b}_{j}^{2}}<\mathrm{\infty },$
(3.25)

which implies that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.26)

from Theorem 2.1 and Kronecker’s lemma. By (3.22) and (3.23) again, we have

$\begin{array}{c}\sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|>{c}_{j}\right)\le \sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|\alpha \left({log}^{+}|{X}_{j}|\right)\ge {c}_{j}\alpha \left(log{c}_{j}\right)\right)\hfill \\ \phantom{\sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|>{c}_{j}\right)}\le \sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{1}|\alpha \left({log}^{+}|{X}_{1}|\right)\ge {C}_{1}j\right)<\mathrm{\infty },\hfill \\ \sum _{j=1}^{\mathrm{\infty }}P\left({X}_{j}\ne {X}_{j}^{\left({c}_{j}\right)}\right)=\sum _{j=1}^{\mathrm{\infty }}P\left(|{X}_{j}|>{c}_{j}\right)=\sum _{j=1}^{m-1}P\left(|{X}_{j}|>{c}_{j}\right)+\sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|>{c}_{j}\right)<\mathrm{\infty }.\hfill \end{array}$

By Borel-Cantelli lemma, we have $P\left({X}_{j}\ne {X}_{j}^{\left({c}_{j}\right)},\text{i.o.}\right)=0$. Together with (3.26), we can see that

${b}_{n}^{-1}\sum _{i=1}^{n}{a}_{i}\left({X}_{i}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(3.27)

Taking ${d}_{n}={b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}$ for $n\ge 1$, we get the desired result. □

Theorem 3.5 Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of ψ-mixing random variables with ${\sum }_{n=1}^{\mathrm{\infty }}\psi \left(n\right)<\mathrm{\infty }$. If for some $1\le p\le 2$,

$\sum _{n=1}^{\mathrm{\infty }}{n}^{-p}E{|{X}_{n}\alpha \left({log}^{+}|{X}_{n}|\right)|}^{p}<\mathrm{\infty },$

then there exist ${d}_{n}\in \mathbf{R}$, $n=1,2,\dots$ , such that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}-{d}_{n}\to 0$ a.s.

Proof Proceeding as in the proof of Theorem 3.4, it is easy to see that

$\begin{array}{rcl}\sum _{j=1}^{\mathrm{\infty }}P\left({X}_{j}\ne {X}_{j}^{\left({c}_{j}\right)}\right)& \le & m-1+\sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|\alpha \left({log}^{+}|{X}_{j}|\right)\ge {c}_{j}\alpha \left(log{c}_{j}\right)\right)\\ \le & m-1+\sum _{j=m}^{\mathrm{\infty }}P\left(|{X}_{j}|\alpha \left({log}^{+}|{X}_{j}|\right)\ge {C}_{1}j\right)<\mathrm{\infty }.\end{array}$

By the Borel-Cantelli lemma, for any sequence $\left\{{d}_{n},n\ge 1\right\}\subset \mathbf{R}$, the sequences $\left\{{b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}-{d}_{n}\right\}$ and $\left\{{b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}^{\left({c}_{i}\right)}-{d}_{n}\right\}$ converge on the same set and to the same limit. We will show that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0$ a.s., which gives the theorem with ${d}_{n}={b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}E{X}_{i}^{\left({c}_{i}\right)}$. Note that $\left\{{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)/{b}_{i},i\ge 1\right\}$ is a sequence of mean zero ψ-mixing random variables. By the ${C}_{r}$ inequality and Jensen’s inequality, we can see that

$\begin{array}{rcl}\sum _{j=1}^{\mathrm{\infty }}\frac{E|{a}_{j}\left({X}_{j}^{\left({c}_{j}\right)}-E{X}_{j}^{\left({c}_{j}\right)}\right){|}^{p}}{{b}_{j}^{p}}& \le & C\left(m-1\right)+C\sum _{j=m}^{\mathrm{\infty }}{c}_{j}^{-p}E|{X}_{j}{|}^{p}I\left(|{X}_{j}|\le {c}_{j}\right)\\ \le & C\left(m-1\right)+C\sum _{j=m}^{\mathrm{\infty }}{j}^{-p}{\left(\alpha \left(log{c}_{j}\right)\right)}^{p}E|{X}_{j}{|}^{p}I\left(|{X}_{j}|\le {c}_{j}\right)\\ \le & C\left(m-1\right)+C\sum _{j=1}^{\mathrm{\infty }}{j}^{-p}E|{X}_{j}\alpha \left({log}^{+}|{X}_{j}|\right){|}^{p}<\mathrm{\infty }.\end{array}$

It follows from Theorem 2.3 that ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}\left({X}_{i}^{\left({c}_{i}\right)}-E{X}_{i}^{\left({c}_{i}\right)}\right)\to 0$ a.s. The proof is complete. □
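As a numerical sanity check on the conclusion of Theorem 3.5 in the simplest degenerate case: an i.i.d. sequence is trivially ψ-mixing with $\psi(n)=0$, and with the illustrative (hypothetical) choices ${a}_{i}=1$ and ${b}_{n}=n$, the conclusion with ${d}_{n}=0$ reduces to the classical strong law of large numbers. A minimal simulation sketch:

```python
import numpy as np

# Sanity-check sketch (not part of the original proof):
# an i.i.d. N(0,1) sequence is psi-mixing with psi(n) = 0, and with the
# hypothetical weights a_i = 1, b_n = n, the conclusion of Theorem 3.5
# (with d_n = 0, since E X_i = 0) reduces to the classical SLLN:
#     n^{-1} * sum_{i<=n} X_i -> 0  a.s.

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)

# Weighted averages b_n^{-1} * sum_{i<=n} a_i X_i for every n at once.
partial_means = np.cumsum(x) / np.arange(1, n + 1)

# The averages should shrink toward 0 as n grows.
for k in (100, 10_000, 200_000):
    print(k, abs(partial_means[k - 1]))
```

On a single sample path this only illustrates the almost-sure limit; the magnitude at the final index should be of order $n^{-1/2}$.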

Corollary 3.3 Let the conditions of Theorem 3.5 be satisfied. If, furthermore, $E{X}_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\int }_{1}^{\mathrm{\infty }}P\left(|{X}_{n}|>s{c}_{n}\right)\phantom{\rule{0.2em}{0ex}}ds<\mathrm{\infty }$, then ${b}_{n}^{-1}{\sum }_{i=1}^{n}{a}_{i}{X}_{i}\to 0$ a.s.

## References

1. Blum JR, Hanson DL, Koopmans L: On the strong law of large numbers for a class of stochastic processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1963, 2: 1–11. 10.1007/BF00535293

2. Yang SC: Almost sure convergence of weighted sums of mixing sequences. J. Syst. Sci. Math. Sci. 1995, 15(3):254–265.

3. Wu QY: Strong consistency of M estimator in linear model for ρ -mixing, φ -mixing, ψ -mixing samples. Math. Appl. 2004, 17(3):393–397.

4. Wang XJ, Hu SH, Shen Y, Yang WZ: Maximal inequality for ψ -mixing sequences and its applications. Appl. Math. Lett. 2010, 23: 1156–1161. 10.1016/j.aml.2010.04.010

5. Zhu YC, Deng X, Pan J, Ling JM, Wang XJ: Strong law of large numbers for sequences of ψ -mixing random variables. J. Math. Study 2012, 45(4):404–410.

6. Pan J, Zhu YC, Zou WY, Wang XJ: Some convergence results for sequences of ψ -mixing random variables. Chin. Q. J. Math. 2013, 28(1):111–117.

7. Jamison B, Orey S, Pruitt W: Convergence of weighted averages of independent random variables. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1965, 4: 40–44. 10.1007/BF00535481

8. Wang XJ, Hu SH, Yang WZ, Wang XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012., 2012: Article ID 315138

9. Tang XF: Some strong laws of large numbers for weighted sums of asymptotically almost negatively associated random variables. J. Inequal. Appl. 2013., 2013:

10. Shen AT, Wu RC: Strong and weak convergence for asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2013., 2013:

11. Chow YS, Teicher H: Probability Theory: Independence, Interchangeability, Martingales. 2nd edition. Springer, New York; 1988:124.

12. Hu TC, Taylor RL: On the strong law for arrays and for the bootstrap mean and variance. Int. J. Math. Math. Sci. 1997, 20(2):375–382. 10.1155/S0161171297000483

13. Bai ZD, Cheng PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 2000, 46: 105–112. 10.1016/S0167-7152(99)00093-0

14. Gan SX, Chen PY: On the limiting behavior of the maximum partial sums for arrays of rowwise NA random variables. Acta Math. Sci., Ser. B 2007, 27(2):283–290.

15. Kuczmaszewska A:On Chung-Teicher type strong law of large numbers for ${\rho }^{\ast }$-mixing random variables. Discrete Dyn. Nat. Soc. 2008., 2008:

16. Wu QY: Complete convergence for negatively dependent sequences of random variables. J. Inequal. Appl. 2010., 2010: Article ID 507293

17. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J. Inequal. Appl. 2010., 2010:

18. Wu QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012., 2012:

19. Sung SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 2011, 52: 447–454. 10.1007/s00362-009-0241-9

20. Wang XJ, Li XQ, Hu SH, Yang WZ: Strong limit theorems for weighted sums of negatively associated random variables. Stoch. Anal. Appl. 2011, 29: 1–14.

21. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2011., 2011:

22. Wang XJ, Hu SH, Volodin AI: Strong limit theorems for weighted sums of NOD sequence and exponential inequalities. Bull. Korean Math. Soc. 2011, 48(5):923–938. 10.4134/BKMS.2011.48.5.923

23. Wang XJ, Li XQ, Yang WZ, Hu SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 2012, 25: 1916–1920. 10.1016/j.aml.2012.02.069

24. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise negatively orthant dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 2012, 106: 235–245. 10.1007/s13398-011-0048-0

25. Zhou XC, Tan CC, Lin JG:On the strong laws for weighted sums of ${\rho }^{\ast }$-mixing random variables. J. Inequal. Appl. 2011., 2011:

26. Shen AT: Some strong limit theorems for arrays of rowwise negatively orthant-dependent random variables. J. Inequal. Appl. 2011., 2011:

27. Shen AT, Wu RC, Chen Y, Zhou Y: Complete convergence of the maximum partial sums for arrays of rowwise of AANA random variables. Discrete Dyn. Nat. Soc. 2013., 2013:

## Acknowledgements

The authors are most grateful to the editor Andrei Volodin and an anonymous referee for careful reading of the manuscript and valuable suggestions, which helped in improving an earlier version of this paper. This work was supported by the Natural Science Project of Department of Education of Anhui Province (KJ2011z056) and the National Natural Science Foundation of China (11201001).

## Author information


### Corresponding author

Correspondence to Ling Tang.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.


Xu, H., Tang, L. Strong convergence properties for ψ-mixing random variables. J Inequal Appl 2013, 360 (2013). https://doi.org/10.1186/1029-242X-2013-360