# A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law

## Abstract

Let X, X1, X2,... be a sequence of independent and identically distributed random variables in the domain of attraction of the normal law. A universal almost sure limit theorem for the self-normalized partial sums $S_n/V_n$ is established, where ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ and ${V}_{n}^{2}={\sum }_{i=1}^{n}{X}_{i}^{2}$.

Mathematics Subject Classification: 60F15.

## 1. Introduction

Throughout this article, we assume that $\{X, X_n\}_{n\in ℕ}$ is a sequence of independent and identically distributed (i.i.d.) random variables with a non-degenerate distribution function F. For each n ≥ 1, $S_n/V_n$ denotes the self-normalized partial sum, where ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ and ${V}_{n}^{2}={\sum }_{i=1}^{n}{X}_{i}^{2}$. We say that the random variable X belongs to the domain of attraction of the normal law if there exist constants $a_n > 0$ and $b_n$ such that

$\frac{{S}_{n}-{b}_{n}}{{a}_{n}}\stackrel{d}{\to }\mathcal{N},$
(1)

where $\mathcal{N}$ is a standard normal random variable; in this case we say that $\{X_n\}_{n\in ℕ}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if

$\underset{x\to \infty }{\text{lim}}\frac{{x}^{2}ℙ\left(|X|>x\right)}{\mathbb{E}{X}^{2}I\left(|X|\le x\right)}=0.$
(2)

In contrast to the well-known classical central limit theorem, Giné et al. [1] obtained the following self-normalized version of the central limit theorem: $\left({S}_{n}-\mathbb{E}{S}_{n}\right)/{V}_{n}\stackrel{d}{\to }\mathcal{N}$ as n → ∞ if and only if (2) holds.

Brosamler [2] and Schatte [3] obtained the following almost sure central limit theorem (ASCLT): Let $\{X_n\}_{n\in ℕ}$ be i.i.d. random variables with mean 0, variance $\sigma^2 > 0$, and partial sums $S_n$. Then

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left\{\frac{{S}_{k}}{\sigma \sqrt{k}}\le x\right\}=\Phi \left(x\right)\quad \text{a.s.}\quad \text{for any}\phantom{\rule{2.77695pt}{0ex}}x\in ℝ,$
(3)

with $d_k = 1/k$ and ${D}_{n}={\sum }_{k=1}^{n}{d}_{k}$, where I denotes the indicator function and Φ(x) is the standard normal distribution function. Some ASCLT results for partial sums were obtained by Lacey and Philipp [4], Ibragimov and Lifshits [5], Miao [6], Berkes and Csáki [7], Hörmann [8], Wu [9, 10], and Ye and Wu [11]. Huang and Pang [12] and Zhang and Yang [13] obtained ASCLT results for the self-normalized version.
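As a quick numerical illustration of (3) (a Monte Carlo sketch only, not part of the theory; the sample size, number of replications, and seed below are arbitrary choices), one can average the indicators with the logarithmic weights $d_k = 1/k$ and compare the result with Φ(0) = 1/2:

```python
import numpy as np

# Monte Carlo illustration of the ASCLT (3) with logarithmic weights d_k = 1/k:
# for standard normal X_i, the weighted average of indicators I{S_k/sqrt(k) <= x}
# should approach Phi(x); here x = 0, so the target value is Phi(0) = 0.5.
rng = np.random.default_rng(0)
n, runs, x = 5000, 100, 0.0
k = np.arange(1, n + 1)
d = 1.0 / k
D = d.sum()
vals = []
for _ in range(runs):
    S = np.cumsum(rng.standard_normal(n))          # partial sums S_k
    vals.append((d * (S / np.sqrt(k) <= x)).sum() / D)
print(np.mean(vals))  # close to 0.5
```

The convergence in (3) is logarithmically slow, so even at n = 5000 the agreement is only rough; averaging over independent runs makes the picture clearer.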

Under mild moment conditions, the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character than that of the CLT. The difference between the CLT and the ASCLT lies in the weight sequence used in the ASCLT.

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [[14], p. 35]) shows that the larger the weight sequence {d k ; k ≥ 1} in (3) is, the stronger the resulting relation becomes. By this argument, one should expect stronger results from larger weights, and it would be of considerable interest to determine the optimal weights.

On the other hand, by Theorem 1 of Schatte [3], (3) fails for the weight $d_k = 1$. The optimal weight sequence remains unknown.

The purpose of this article is to establish the ASCLT for self-normalized partial sums of random variables in the domain of attraction of the normal law. We will show that the ASCLT holds under a fairly general growth condition on the weights, namely $d_k = k^{-1}\,\text{exp}\left({\left(\text{ln}\,k\right)}^{\alpha}\right)$ with 0 ≤ α < 1/2.

Our theorem is formulated in a more general setting.

Theorem 1.1. Let $\{X, X_n\}_{n\in ℕ}$ be a sequence of i.i.d. random variables in the domain of attraction of the normal law with mean zero. Suppose 0 ≤ α < 1/2 and set

${d}_{k}=\frac{\text{exp}\left({\text{ln}}^{\alpha }k\right)}{k},{D}_{n}=\sum _{k=1}^{n}{d}_{k}.$
(4)

Then

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left\{\frac{{S}_{k}}{{V}_{k}}\le x\right\}=\Phi \left(x\right)\phantom{\rule{1em}{0ex}}\text{a}.\text{s}.\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{2.77695pt}{0ex}}\text{any}\phantom{\rule{2.77695pt}{0ex}}x\in ℝ.$
(5)
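Theorem 1.1 can likewise be illustrated numerically. The sketch below is illustrative only: the Student t distribution with 2 degrees of freedom is one convenient choice of a symmetric law with infinite variance that lies in the domain of attraction of the normal law, and the parameters (α = 1/4, sample size, seed) are arbitrary. It averages the indicators $I\{S_k/V_k \le 0\}$ with the weights (4) and compares the result with Φ(0) = 1/2:

```python
import numpy as np

# Empirical check of (5): self-normalized ASCLT with the weights d_k from (4).
# X ~ Student t with 2 degrees of freedom: symmetric, infinite variance, but in
# the domain of attraction of the normal law, so Theorem 1.1 applies.
rng = np.random.default_rng(1)
n, runs, alpha, x = 5000, 100, 0.25, 0.0
k = np.arange(1, n + 1)
d = np.exp(np.log(k) ** alpha) / k   # weights (4); note d_1 = exp(0) = 1
D = d.sum()
vals = []
for _ in range(runs):
    X = rng.standard_t(2, size=n)
    S = np.cumsum(X)                 # partial sums S_k
    V = np.sqrt(np.cumsum(X ** 2))   # self-normalizers V_k
    vals.append((d * (S / V <= x)).sum() / D)
print(np.mean(vals))  # close to Phi(0) = 0.5
```

As with (3), the logarithmic-type averaging converges slowly, so the agreement at moderate n is approximate.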

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2. Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\in ℕ}$ by any ${\left\{{d}_{k}^{*}\right\}}_{k\in ℕ}$ such that $0\le {d}_{k}^{*}\le {d}_{k}$ and ${\sum }_{k=1}^{\infty }{d}_{k}^{*}=\infty$.

Remark 1.3. Our result not only gives a substantial improvement of the weight sequence in Theorem 1.1 of Huang and Pang [12], but also removes the condition $nℙ\left(|{X}_{1}|>{\eta }_{n}\right)\le c{\left(\text{log}n\right)}^{{\epsilon }_{0}}$, 0 < ε0 < 1, in Theorem 1.1 of [12].

Remark 1.4. If $\mathbb{E}{X}^{2}<\infty$, then X is in the domain of attraction of the normal law. Therefore, the class of random variables covered by Theorem 1.1 is very broad.

Remark 1.5. Whether Theorem 1.1 holds for 1/2 ≤ α < 1 remains an open problem.

## 2. Proofs

In the following, a n ~ b n denotes limn→∞a n /b n = 1. The symbol c stands for a generic positive constant which may differ from one place to another.

Furthermore, the following three lemmas will be useful in the proof, and the first is due to [15].

Lemma 2.1. Let X be a random variable with$\mathbb{E}X=0$, and denote$l\left(x\right)=\mathbb{E}{X}^{2}I\left\{\left|X\right|\le x\right\}$. The following statements are equivalent:

1. (i)

X is in the domain of attraction of the normal law.

2. (ii)

${x}^{2}ℙ\left(\left|X\right|>x\right)=o\left(l\left(x\right)\right)$.

3. (iii)

$x\mathbb{E}\left(\left|X\right|I\left(|X|>x\right)\right)=o\left(l\left(x\right)\right)$.

4. (iv)

$\mathbb{E}\left({\left|X\right|}^{\alpha }I\left(\left|X\right|\le x\right)\right)=o\left({x}^{\alpha -2}l\left(x\right)\right)$ for α > 2.

Lemma 2.2. Let $\{\xi, \xi_n\}_{n\in ℕ}$ be a sequence of uniformly bounded random variables. If there exist constants c > 0 and δ > 0 such that

$\left|\mathbb{E}{\xi }_{k}{\xi }_{j}\right|\le c{\left(\frac{k}{j}\right)}^{\delta },\quad \text{for}\phantom{\rule{2.77695pt}{0ex}}1\le k<j,$
(6)

then

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}{\xi }_{k}=0\phantom{\rule{1em}{0ex}}\text{a.s.},$
(7)

where d k and D n are defined by (4).

Proof. Since

$\begin{array}{ll}\hfill \mathbb{E}{\left(\sum _{k=1}^{n}{d}_{k}{\xi }_{k}\right)}^{2}& \le \sum _{k=1}^{n}{d}_{k}^{2}\mathbb{E}{\xi }_{k}^{2}+2\sum _{1\le k<j\le n,\,j/k>{\text{ln}}^{2/\delta }{D}_{n}}{d}_{k}{d}_{j}\left|\mathbb{E}{\xi }_{k}{\xi }_{j}\right|+2\sum _{1\le k<j\le n,\,j/k\le {\text{ln}}^{2/\delta }{D}_{n}}{d}_{k}{d}_{j}\left|\mathbb{E}{\xi }_{k}{\xi }_{j}\right|\phantom{\rule{2em}{0ex}}\\ & =:{T}_{n1}+{T}_{n2}+{T}_{n3}.\phantom{\rule{2em}{0ex}}\end{array}$
(8)

By the assumption of Lemma 2.2, there exists a constant c > 0 such that |ξ k | ≤ c for any k. Noting that $\text{exp}\left({\text{ln}}^{\alpha }x\right)=\text{exp}\left({\int }_{1}^{x}\frac{\alpha {\left(\text{ln}u\right)}^{\alpha -1}}{u}\text{d}u\right)$, we see that exp(lnαx) with α < 1 is a slowly varying function at infinity. Hence,

${T}_{n1}\le c\sum _{k=1}^{n}\frac{\text{exp}\left(2{\text{ln}}^{\alpha }k\right)}{{k}^{2}}\le c\sum _{k=1}^{\infty }\frac{\text{exp}\left(2{\text{ln}}^{\alpha }k\right)}{{k}^{2}}<\infty .$

By (6),

${T}_{n2}\le c\sum _{1\le k<j\le n,\,j/k>{\text{ln}}^{2/\delta }{D}_{n}}{d}_{k}{d}_{j}{\left(\frac{k}{j}\right)}^{\delta }\le \frac{c}{{\text{ln}}^{2}{D}_{n}}\sum _{1\le k<j\le n}{d}_{k}{d}_{j}\le c\frac{{D}_{n}^{2}}{{\text{ln}}^{2}{D}_{n}}.$
(9)

On the other hand, if α = 0, then $d_k = e/k$ and $D_n \sim e\,\text{ln}\,n$; hence, for sufficiently large n,

${T}_{n3}\le c\sum _{k=1}^{n}\frac{1}{k}\sum _{j=k}^{k{\text{ln}}^{2/\delta }{D}_{n}}\frac{1}{j}\le c{D}_{n}\text{ln}\text{ln}{D}_{n}\le \frac{{D}_{n}^{2}}{{\text{ln}}^{2}{D}_{n}}.$
(10)

If α > 0, note that

$\begin{array}{ll}\hfill {D}_{n}& ~{\int }_{1}^{n}\frac{\text{exp}\left({\text{ln}}^{\alpha }x\right)}{x}\text{d}x={\int }_{0}^{\text{ln}n}\text{exp}\left({y}^{\alpha }\right)\text{d}y\phantom{\rule{2em}{0ex}}\\ ~{\int }_{0}^{\text{ln}n}\left(\text{exp}\left({y}^{\alpha }\right)+\frac{1-\alpha }{\alpha }{y}^{-\alpha }\text{exp}\left({y}^{\alpha }\right)\right)\text{d}y\phantom{\rule{2em}{0ex}}\\ ={\int }_{0}^{\text{ln}n}\frac{1}{\alpha }{\left({y}^{1-\alpha }\text{exp}\left({y}^{\alpha }\right)\right)}^{\prime }\text{d}y\phantom{\rule{2em}{0ex}}\\ =\frac{1}{\alpha }{\text{ln}}^{1-\alpha }n\text{exp}\left({\text{ln}}^{\alpha }n\right),n\to \infty .\phantom{\rule{2em}{0ex}}\end{array}$
(11)

This implies

$\text{ln}{D}_{n}~{\text{ln}}^{\alpha }n,\phantom{\rule{1em}{0ex}}\text{exp}\left({\text{ln}}^{\alpha }n\right)~\frac{\alpha {D}_{n}}{{\left(\text{ln}{D}_{n}\right)}^{\frac{1-\alpha }{\alpha }}},\phantom{\rule{1em}{0ex}}\text{lnln}{D}_{n}~\alpha \text{lnln}n.$
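The first equivalence in (11), $D_n \sim \int_1^n \text{exp}(\text{ln}^{\alpha}x)\,x^{-1}\,\text{d}x = \int_0^{\text{ln}\,n}\text{exp}(y^{\alpha})\,\text{d}y$, is already visible at moderate n. The sketch below is illustrative only (α = 0.3 and n = 10^6 are arbitrary choices, and the integral is evaluated by a simple midpoint rule); the subsequent relations such as $\text{ln}\,D_n \sim \text{ln}^{\alpha}n$ involve lower-order terms that vanish extremely slowly and are much harder to observe numerically.

```python
import math

alpha, n = 0.3, 10**6

# direct sum of the weights (4): D_n = sum_{k<=n} exp(ln^alpha k)/k
D = sum(math.exp(math.log(k) ** alpha) / k for k in range(1, n + 1))

# the integral from (11) after the substitution y = ln x:
# \int_0^{ln n} exp(y^alpha) dy, evaluated by the midpoint rule
m = 200000
L = math.log(n)
h = L / m
I = h * sum(math.exp(((i + 0.5) * h) ** alpha) for i in range(m))

print(D / I)  # close to 1
```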

Thus combining |ξ k | ≤ c for any k,

${T}_{n3}\le c\sum _{k=1}^{n}{d}_{k}\sum _{j=k}^{k{\text{ln}}^{2/\delta }{D}_{n}}{d}_{j}\le c\,\text{exp}\left({\text{ln}}^{\alpha }n\right)\sum _{k=1}^{n}{d}_{k}\sum _{j=k}^{k{\text{ln}}^{2/\delta }{D}_{n}}\frac{1}{j}\le c\,\text{exp}\left({\text{ln}}^{\alpha }n\right){D}_{n}\,\text{lnln}{D}_{n}\le c\frac{{D}_{n}^{2}\,\text{lnln}{D}_{n}}{{\left(\text{ln}{D}_{n}\right)}^{\left(1-\alpha \right)/\alpha }}.$

Since α < 1/2, we have (1 - 2α)/(2α) > 0 and ε1 := 1/(2α) - 1 > 0. Thus, for sufficiently large n, we get

${T}_{n3}\le c\frac{{D}_{n}^{2}}{{\left(\text{ln}{D}_{n}\right)}^{1/\left(2\alpha \right)}}\frac{\text{lnln}{D}_{n}}{{\left(\text{ln}{D}_{n}\right)}^{\left(1-2\alpha \right)/\left(2\alpha \right)}}\le \frac{{D}_{n}^{2}}{{\left(\text{ln}{D}_{n}\right)}^{1/\left(2\alpha \right)}}=\frac{{D}_{n}^{2}}{{\left(\text{ln}{D}_{n}\right)}^{1+{\epsilon }_{1}}}.$
(12)

Let ${T}_{n}:=\frac{1}{{D}_{n}}{\sum }_{k=1}^{n}{d}_{k}{\xi }_{k},{\epsilon }_{2}:=\text{min}\left(1,{\epsilon }_{1}\right)$. Combining (8)-(12), for sufficiently large n, we get

$\mathbb{E}{T}_{n}^{2}\le \frac{c}{{\left(\text{ln}{D}_{n}\right)}^{1+{\epsilon }_{2}}}.$

By (11), we have $D_{n+1} \sim D_n$. Let 0 < η < ε2/(1 + ε2) and ${n}_{k}=\text{inf}\left\{n;{D}_{n}\ge \text{exp}\left({k}^{1-\eta }\right)\right\}$; then ${D}_{{n}_{k}}\ge \text{exp}\left({k}^{1-\eta }\right)$ and ${D}_{{n}_{k}-1}<\text{exp}\left({k}^{1-\eta }\right)$. Therefore

$1\le \frac{{D}_{{n}_{k}}}{\text{exp}\left({k}^{1-\eta }\right)}\sim \frac{{D}_{{n}_{k}-1}}{\text{exp}\left({k}^{1-\eta }\right)}<1,\quad k\to \infty ,$

that is,

${D}_{{n}_{k}}~\text{exp}\left({k}^{1-\eta }\right).$

Since (1 - η)(1 + ε2) > 1 by the definition of η, for any ε > 0 we have

$\sum _{k=1}^{\infty }ℙ\left(\left|{T}_{{n}_{k}}\right|>\epsilon \right)\le c\sum _{k=1}^{\infty }\mathbb{E}{T}_{{n}_{k}}^{2}\le c\sum _{k=1}^{\infty }\frac{1}{{k}^{\left(1-\eta \right)\left(1+{\epsilon }_{2}\right)}}<\infty .$

By the Borel-Cantelli lemma,

${T}_{{n}_{k}}\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$

Now, for ${n}_{k}<n\le {n}_{k+1}$, since $|\xi_i| \le c$ for all i,

$\left|{T}_{n}\right|\le \left|{T}_{{n}_{k}}\right|+\frac{c}{{D}_{{n}_{k}}}\sum _{i={n}_{k}+1}^{{n}_{k+1}}{d}_{i}=\left|{T}_{{n}_{k}}\right|+c\left(\frac{{D}_{{n}_{k+1}}}{{D}_{{n}_{k}}}-1\right)\to 0\quad \text{a.s.}$

from $\frac{{D}_{{n}_{k+1}}}{{D}_{{n}_{k}}}\sim \frac{\text{exp}\left({\left(k+1\right)}^{1-\eta }\right)}{\text{exp}\left({k}^{1-\eta }\right)}=\text{exp}\left({k}^{1-\eta }\left({\left(1+1/k\right)}^{1-\eta }-1\right)\right)\sim \text{exp}\left(\left(1-\eta \right){k}^{-\eta }\right)\to 1$. That is, (7) holds. This completes the proof of Lemma 2.2.

Let $l\left(x\right)=\mathbb{E}{X}^{2}I\left\{\left|X\right|\le x\right\}$, b = inf{x ≥ 1; l(x) > 0} and

${\eta }_{j}=\text{inf}\left\{s;s\ge b+1,\frac{l\left(s\right)}{{s}^{2}}\le \frac{1}{j}\right\}\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}j\ge 1.$

By the definition of η j , we have $jl\left({\eta }_{j}\right)\le {\eta }_{j}^{2}$ and $jl\left({\eta }_{j}-\epsilon \right)>{\left({\eta }_{j}-\epsilon \right)}^{2}$ for any ε > 0. This implies that

$nl\left({\eta }_{n}\right)~{\eta }_{n}^{2},\phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{2.77695pt}{0ex}}n\to \infty .$
(13)
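To make the definition of η j concrete, take X standard normal, for which $l(x)=\mathbb{E}{X}^{2}I(|X|\le x)$ has the closed form $\text{erf}(x/\sqrt{2})-2x\varphi(x)$ (φ the standard normal density) and b = 1. The sketch below is illustrative only (the bisection scheme and the test values of j are arbitrary choices): it solves $l(s)/s^2 = 1/j$ and confirms (13), here with $j\,l(\eta_j)/\eta_j^2$ equal to 1 once the constraint s ≥ b + 1 is no longer binding; since $l(x)\to \mathbb{E}X^2 = 1$, one also sees $\eta_j \sim \sqrt{j}$.

```python
import math

def l(x):
    # l(x) = E X^2 I(|X| <= x) for X standard normal:
    # l(x) = erf(x / sqrt(2)) - 2 x phi(x), with phi the standard normal density
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return math.erf(x / math.sqrt(2)) - 2 * x * phi

def eta(j, b=1.0):
    # eta_j = inf{s >= b + 1 : l(s)/s^2 <= 1/j}; for the standard normal law
    # l(s)/s^2 is decreasing on [b + 1, infinity), so plain bisection suffices
    lo, hi = b + 1, 10 * math.sqrt(j) + 2
    if l(lo) / lo**2 <= 1 / j:
        return lo
    for _ in range(200):
        mid = (lo + hi) / 2
        if l(mid) / mid**2 <= 1 / j:
            hi = mid
        else:
            lo = mid
    return hi

for j in (10**2, 10**4, 10**6):
    e = eta(j)
    print(j, e, j * l(e) / e**2)  # last column is essentially 1, matching (13)
```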

For every 1 ≤ in, let

${\stackrel{̄}{X}}_{ni}={X}_{i}I\left(\left|{X}_{i}\right|\le {\eta }_{n}\right),\phantom{\rule{1em}{0ex}}{\stackrel{̄}{S}}_{n}=\sum _{i=1}^{n}{\stackrel{̄}{X}}_{ni},{\stackrel{̄}{V}}_{n}^{2}=\sum _{i=1}^{n}{\stackrel{̄}{X}}_{ni}^{2}.$

Lemma 2.3. Suppose that the assumptions of Theorem 1.1 hold. Then

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left\{\frac{{\bar{S}}_{k}-\mathbb{E}{\bar{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\le x\right\}=\Phi \left(x\right)\quad \text{a.s.}\quad \text{for any}\phantom{\rule{2.77695pt}{0ex}}x\in ℝ,$
(14)
$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\left(I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right)-\mathbb{E}I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right)\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.},$
(15)
$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\left(f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)-\mathbb{E}f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.,}$
(16)

where d k and D n are defined by (4) and f is a non-negative, bounded Lipschitz function.

Proof. By the central limit theorem for i.i.d. random variables and $\text{Var}{\bar{S}}_{n}\sim nl\left({\eta }_{n}\right)$ as n → ∞, which follows from $\mathbb{E}X=0$, Lemma 2.1 (iii), and (13), it follows that

$\frac{{\stackrel{̄}{S}}_{n}-\mathbb{E}{\stackrel{̄}{S}}_{n}}{\sqrt{nl\left({\eta }_{n}\right)}}\stackrel{d}{\to }\mathcal{N},\phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{2.77695pt}{0ex}}n\to \infty ,$

where $\mathcal{N}$ denotes a standard normal random variable. This implies that, for any non-negative, bounded Lipschitz function g(x),

$\mathbb{E}g\left(\frac{{\bar{S}}_{n}-\mathbb{E}{\bar{S}}_{n}}{\sqrt{nl\left({\eta }_{n}\right)}}\right)\to \mathbb{E}g\left(\mathcal{N}\right),\quad \text{as}\phantom{\rule{2.77695pt}{0ex}}n\to \infty .$

Hence, we obtain

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)=\mathbb{E}g\left(\mathcal{N}\right)$

from the Toeplitz lemma.

On the other hand, note that (14) is equivalent to

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)=\mathbb{E}g\left(\mathcal{N}\right)\phantom{\rule{1em}{0ex}}\text{a.s.}$

from Theorem 7.1 of [16] and Section 2 of [17]. Hence, to prove (14), it suffices to prove

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\left(g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)-\mathbb{E}g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.,}$
(17)

for any g(x) which is a non-negative, bounded Lipschitz function.

For any k ≥ 1, let

${\xi }_{k}=g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)-\mathbb{E}g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right).$

For any 1 ≤ k < j, note that $g\left(\frac{{\stackrel{̄}{S}}_{k}-\mathbb{E}{\stackrel{̄}{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right)$ and $g\left(\frac{{\stackrel{̄}{S}}_{j}-\mathbb{E}{\stackrel{̄}{S}}_{j}-\sum _{i=1}^{k}\left({X}_{i}-\mathbb{E}{X}_{i}\right)I\left(\left|{X}_{i}\right|\le {\eta }_{j}\right)}{\sqrt{jl\left({\eta }_{j}\right)}}\right)$ are independent and g(x) is a non-negative, bounded Lipschitz function. By the definition of η j , we get,

$\begin{array}{ll}\hfill \left|\mathbb{E}{\xi }_{k}{\xi }_{j}\right|& =\left|\text{Cov}\left(g\left(\frac{{\bar{S}}_{k}-\mathbb{E}{\bar{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right),g\left(\frac{{\bar{S}}_{j}-\mathbb{E}{\bar{S}}_{j}}{\sqrt{jl\left({\eta }_{j}\right)}}\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ =\left|\text{Cov}\left(g\left(\frac{{\bar{S}}_{k}-\mathbb{E}{\bar{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\right),g\left(\frac{{\bar{S}}_{j}-\mathbb{E}{\bar{S}}_{j}}{\sqrt{jl\left({\eta }_{j}\right)}}\right)-g\left(\frac{{\bar{S}}_{j}-\mathbb{E}{\bar{S}}_{j}-\sum _{i=1}^{k}\left({X}_{i}-\mathbb{E}{X}_{i}\right)I\left(\left|{X}_{i}\right|\le {\eta }_{j}\right)}{\sqrt{jl\left({\eta }_{j}\right)}}\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ \le c\frac{\mathbb{E}\left|\sum _{i=1}^{k}\left({X}_{i}-\mathbb{E}{X}_{i}\right)I\left(\left|{X}_{i}\right|\le {\eta }_{j}\right)\right|}{\sqrt{jl\left({\eta }_{j}\right)}}\le c\frac{\sqrt{k\mathbb{E}{X}^{2}I\left(\left|X\right|\le {\eta }_{j}\right)}}{\sqrt{jl\left({\eta }_{j}\right)}}\phantom{\rule{2em}{0ex}}\\ =c{\left(\frac{k}{j}\right)}^{1/2}.\phantom{\rule{2em}{0ex}}\end{array}$

By Lemma 2.2, (17) holds.

Now we prove (15). Let

${Z}_{k}=I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right)-\mathbb{E}I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right)\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}\text{any}\phantom{\rule{1em}{0ex}}k\ge 1.$

Note that I(A ∪ B) - I(B) ≤ I(A) for any sets A and B. For 1 ≤ k < j, by Lemma 2.1 (ii) and (13), we get

$ℙ\left(\left|X\right|>{\eta }_{j}\right)=o\left(1\right)\frac{l\left({\eta }_{j}\right)}{{\eta }_{j}^{2}}=\frac{o\left(1\right)}{j}.$
(18)

Hence

$\begin{array}{ll}\hfill \left|\mathbb{E}{Z}_{k}{Z}_{j}\right|& =\left|\text{Cov}\left(I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right),I\left(\bigcup _{i=1}^{j}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ =\left|\text{Cov}\left(I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{k}\right)\right),I\left(\bigcup _{i=1}^{j}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)-I\left(\bigcup _{i=k+1}^{j}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ \le \mathbb{E}\left|I\left(\bigcup _{i=1}^{j}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)-I\left(\bigcup _{i=k+1}^{j}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ \le \mathbb{E}I\left(\bigcup _{i=1}^{k}\left(\left|{X}_{i}\right|>{\eta }_{j}\right)\right)\le kℙ\left(\left|X\right|>{\eta }_{j}\right)\phantom{\rule{2em}{0ex}}\\ \le \frac{k}{j}.\phantom{\rule{2em}{0ex}}\end{array}$

By Lemma 2.2, (15) holds.

Finally, we prove (16). Let

${\zeta }_{k}=f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)-\mathbb{E}f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}\text{any}\phantom{\rule{1em}{0ex}}k\ge 1.$

For 1 ≤ k < j,

$\begin{array}{ll}\hfill \left|\mathbb{E}{\zeta }_{k}{\zeta }_{j}\right|& =\left|\text{Cov}\left(f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right),f\left(\frac{{\stackrel{̄}{V}}_{j}^{2}}{jl\left({\eta }_{j}\right)}\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ =\left|\text{Cov}\left(f\left(\frac{{\stackrel{̄}{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right),f\left(\frac{{\stackrel{̄}{V}}_{j}^{2}}{jl\left({\eta }_{j}\right)}\right)-f\left(\frac{{\stackrel{̄}{V}}_{j}^{2}-\sum _{i=1}^{k}{X}_{i}^{2}I\left(\left|{X}_{i}\right|\le {\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}\right)\right)\right|\phantom{\rule{2em}{0ex}}\\ \le c\frac{\mathbb{E}\left(\sum _{i=1}^{k}{X}_{i}^{2}I\left(\left|{X}_{i}\right|\le {\eta }_{j}\right)\right)}{jl\left({\eta }_{j}\right)}=c\frac{k\mathbb{E}{X}^{2}I\left(\left|X\right|\le {\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}=c\frac{kl\left({\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}\phantom{\rule{2em}{0ex}}\\ =c\frac{k}{j}.\phantom{\rule{2em}{0ex}}\end{array}$

By Lemma 2.2, (16) holds. This completes the proof of Lemma 2.3.

Proof of Theorem 1.1. For any given 0 < ε < 1, note that

$\begin{array}{l}I\left(\frac{{S}_{k}}{{V}_{k}}\le x\right)\le I\left(\frac{{\bar{S}}_{k}}{\sqrt{\left(1+\epsilon \right)kl\left({\eta }_{k}\right)}}\le x\right)+I\left({\bar{V}}_{k}^{2}>\left(1+\epsilon \right)kl\left({\eta }_{k}\right)\right)+I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right),\quad \text{for}\phantom{\rule{2.77695pt}{0ex}}x\ge 0,\\ I\left(\frac{{S}_{k}}{{V}_{k}}\le x\right)\le I\left(\frac{{\bar{S}}_{k}}{\sqrt{\left(1-\epsilon \right)kl\left({\eta }_{k}\right)}}\le x\right)+I\left({\bar{V}}_{k}^{2}<\left(1-\epsilon \right)kl\left({\eta }_{k}\right)\right)+I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right),\quad \text{for}\phantom{\rule{2.77695pt}{0ex}}x<0,\end{array}$

and

$\begin{array}{l}I\left(\frac{{S}_{k}}{{V}_{k}}\le x\right)\ge I\left(\frac{{\bar{S}}_{k}}{\sqrt{\left(1-\epsilon \right)kl\left({\eta }_{k}\right)}}\le x\right)-I\left({\bar{V}}_{k}^{2}<\left(1-\epsilon \right)kl\left({\eta }_{k}\right)\right)-I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right),\quad \text{for}\phantom{\rule{2.77695pt}{0ex}}x\ge 0,\\ I\left(\frac{{S}_{k}}{{V}_{k}}\le x\right)\ge I\left(\frac{{\bar{S}}_{k}}{\sqrt{\left(1+\epsilon \right)kl\left({\eta }_{k}\right)}}\le x\right)-I\left({\bar{V}}_{k}^{2}>\left(1+\epsilon \right)kl\left({\eta }_{k}\right)\right)-I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right),\quad \text{for}\phantom{\rule{2.77695pt}{0ex}}x<0.\end{array}$

Hence, to prove (5), it suffices to prove

$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left(\frac{{\bar{S}}_{k}}{\sqrt{kl\left({\eta }_{k}\right)}}\le \sqrt{1\pm \epsilon }\,x\right)=\Phi \left(\sqrt{1\pm \epsilon }\,x\right)\phantom{\rule{2.77695pt}{0ex}}\text{a.s.,}$
(19)
$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right)=0\quad \text{a.s.,}$
(20)
$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left({\stackrel{̄}{V}}_{k}^{2}>\left(1+\epsilon \right)kl\left({\eta }_{k}\right)\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.,}$
(21)
$\underset{n\to \infty }{\text{lim}}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left({\bar{V}}_{k}^{2}<\left(1-\epsilon \right)kl\left({\eta }_{k}\right)\right)=0\quad \text{a.s.,}$
(22)

by the arbitrariness of ε > 0.

Firstly, we prove (19). Let 0 < β < 1/2 and let h(·) be a real function such that, for any given x,

$I\left(y\le \sqrt{1\pm \epsilon }\,x-\beta \right)\le h\left(y\right)\le I\left(y\le \sqrt{1\pm \epsilon }\,x+\beta \right).$
(23)

By $\mathbb{E}X=0$, Lemma 2.1 (iii) and (13), we have

$\left|\mathbb{E}{\stackrel{̄}{S}}_{k}\right|=\left|k\mathbb{E}XI\left(\left|X\right|\le {\eta }_{k}\right)\right|=\left|k\mathbb{E}XI\left(\left|X\right|>{\eta }_{k}\right)\right|\le k\mathbb{E}\left|X\right|I\left(\left|X\right|>{\eta }_{k}\right)=o\left(\sqrt{kl\left({\eta }_{k}\right)}\right).$

Combining this with (14), (23), and the arbitrariness of β in (23), we conclude that (19) holds.

By (15), (18) and the Toeplitz lemma,

$\begin{array}{c}0\le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right)=\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}|>{\eta }_{k}\right)\right)+o\left(1\right)\\ \le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}kℙ\left(|X|>{\eta }_{k}\right)+o\left(1\right)\to 0\quad \text{a.s.}\end{array}$

That is, (20) holds.

Now we prove (21). For any μ > 0, let f be a non-negative, bounded Lipschitz function such that

$I\left(x>1+\mu \right)\le f\left(x\right)\le I\left(x>1+\mu /2\right).$

From $\mathbb{E}{\bar{V}}_{k}^{2}=kl\left({\eta }_{k}\right)$, the fact that ${\bar{X}}_{ki}$, 1 ≤ ik, are i.i.d., Lemma 2.1 (iv), and (13),

$\begin{array}{ll}\hfill ℙ\left({\stackrel{̄}{V}}_{k}^{2}>\left(1+\frac{\mu }{2}\right)kl\left({\eta }_{k}\right)\right)& =ℙ\left({\stackrel{̄}{V}}_{k}^{2}-\mathbb{E}{\stackrel{̄}{V}}_{k}^{2}>\frac{\mu }{2}kl\left({\eta }_{k}\right)\right)\phantom{\rule{2em}{0ex}}\\ \le c\frac{\mathbb{E}{\left({\stackrel{̄}{V}}_{k}^{2}-\mathbb{E}{\stackrel{̄}{V}}_{k}^{2}\right)}^{2}}{{k}^{2}{l}^{2}\left({\eta }_{k}\right)}\le c\frac{\mathbb{E}{X}^{4}I\left(\left|X\right|\le {\eta }_{k}\right)}{k{l}^{2}\left({\eta }_{k}\right)}\phantom{\rule{2em}{0ex}}\\ =\frac{o\left(1\right){\eta }_{k}^{2}}{kl\left({\eta }_{k}\right)}=o\left(1\right)\to 0.\phantom{\rule{2em}{0ex}}\end{array}$

Therefore, from (16) and the Toeplitz lemma,

$\begin{array}{ll}\hfill 0& \le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left({\bar{V}}_{k}^{2}>\left(1+\mu \right)kl\left({\eta }_{k}\right)\right)\le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}f\left(\frac{{\bar{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)\phantom{\rule{2em}{0ex}}\\ =\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}f\left(\frac{{\bar{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)+o\left(1\right)\le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}ℙ\left({\bar{V}}_{k}^{2}>\left(1+\mu /2\right)kl\left({\eta }_{k}\right)\right)+o\left(1\right)\phantom{\rule{2em}{0ex}}\\ \to 0\quad \text{a.s.}\phantom{\rule{2em}{0ex}}\end{array}$

Hence, (21) holds. By a similar method, we can prove (22). This completes the proof of Theorem 1.1.

## References

1. Giné E, Götze F, Mason DM: When is the Student t-statistic asymptotically standard normal? Ann Probab 1997, 25: 1514–1531.

2. Brosamler GA: An almost everywhere central limit theorem. Math Proc Camb Philos Soc 1988, 104: 561–574. 10.1017/S0305004100065750

3. Schatte P: On strong versions of the central limit theorem. Mathematische Nachrichten 1988, 137: 249–256. 10.1002/mana.19881370117

4. Lacey MT, Philipp W: A note on the almost sure central limit theorem. Statist Probab Lett 1990, 9: 201–205. 10.1016/0167-7152(90)90056-D

5. Ibragimov IA, Lifshits M: On the convergence of generalized moments in almost sure central limit theorem. Stat Probab Lett 1998, 40: 343–351. 10.1016/S0167-7152(98)00134-5

6. Miao Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc Indian Acad Sci C Math Sci 2008, 118(2):289–294. 10.1007/s12044-008-0021-9

7. Berkes I, Csáki E: A universal result in almost sure central limit theory. Stoch Proc Appl 2001, 94: 105–134. 10.1016/S0304-4149(01)00078-3

8. Hörmann S: Critical behavior in almost sure central limit theory. J Theoret Probab 2007, 20: 613–636. 10.1007/s10959-007-0080-3

9. Wu QY: Almost sure limit theorems for stable distributions. Stat Probab Lett 2011, 81(6):662–672.

10. Wu QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc Math Sci 2011, 121(3):369–377. 10.1007/s12044-011-0036-5

11. Ye DX, Wu QY: Almost sure central limit theorem of product of partial sums for strongly mixing. J Inequal Appl 2011, 2011:9. Article ID 576301, 10.1186/1029-242X-2011-9

12. Huang SH, Pang TX: An almost sure central limit theorem for self-normalized partial sums. Comput Math Appl 2010, 60: 2639–2644. 10.1016/j.camwa.2010.08.093

13. Zhang Y, Yang XY: An almost sure central limit theorem for self-normalized products of sums of i.i.d. random variables. J Math Anal Appl 2011, 376: 29–41. 10.1016/j.jmaa.2010.10.021

14. Chandrasekharan K, Minakshisundaram S: Typical Means. Oxford University Press, Oxford; 1952.

15. Csörgő M, Szyszkowicz B, Wang QY: Donsker's theorem for self-normalized partial sums processes. Ann Probab 2003, 31(3):1228–1240. 10.1214/aop/1055425777

16. Billingsley P: Convergence of Probability Measures. Wiley, New York; 1968.

17. Peligrad M, Shao QM: A note on the almost sure central limit theorem for weakly dependent random variables. Stat Probab Lett 1995, 22: 131–136. 10.1016/0167-7152(94)00059-H

## Acknowledgements

The author is very grateful to the referees and the Editors for their valuable comments and helpful suggestions, which improved the clarity and readability of the paper. This work was supported by the National Natural Science Foundation of China (11061012), the program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011]47), and the support program of the Key Laboratory of Spatial Information and Geomatics (1103108-08).

## Author information


### Corresponding author

Correspondence to Qunying Wu.

### Competing interests

The author declares that they have no competing interests.


Wu, Q. A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J Inequal Appl 2012, 17 (2012). https://doi.org/10.1186/1029-242X-2012-17