# An improved result in almost sure central limit theorem for self-normalized products of partial sums

## Abstract

Let $X,{X}_{1},{X}_{2},\dots$ be a sequence of independent and identically distributed random variables in the domain of attraction of the normal law. A universal result in an almost sure limit theorem for the self-normalized products of partial sums is established.

MSC:60F15.

## 1 Introduction

Let ${\left\{X,{X}_{n}\right\}}_{n\in \mathbb{N}}$ be a sequence of independent and identically distributed (i.i.d.) positive random variables with a non-degenerate distribution function and $\mathbb{E}X=\mu >0$. For each $n\ge 1$, the symbol ${S}_{n}/{V}_{n}$ denotes self-normalized partial sums, where ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$, ${V}_{n}^{2}={\sum }_{i=1}^{n}{\left({X}_{i}-\mu \right)}^{2}$. We say that the random variable X belongs to the domain of attraction of the normal law if there exist constants ${a}_{n}>0$, ${b}_{n}\in \mathbb{R}$ such that

$\frac{{S}_{n}-{b}_{n}}{{a}_{n}}\stackrel{d}{⟶}\mathcal{N}.$
(1)

Here and in the sequel, $\mathcal{N}$ is a standard normal random variable, and $\stackrel{d}{\longrightarrow}$ denotes convergence in distribution. When (1) holds, we say that ${\{X_n\}}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if

$\underset{x\to \mathrm{\infty }}{lim}\frac{{x}^{2}\mathbb{P}\left(|X|>x\right)}{\mathbb{E}{X}^{2}I\left(|X|\le x\right)}=0.$
(2)

In contrast to the well-known classical central limit theorem, Giné et al. [7] obtained the following self-normalized version of the central limit theorem: $(S_n-\mathbb{E}S_n)/V_n\stackrel{d}{\longrightarrow}\mathcal{N}$ as $n\to\infty$ if and only if (2) holds.
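For intuition only, this self-normalized CLT is easy to check numerically. The sketch below is not part of the paper's argument; the choice of Exp(1) for X and the sample sizes are arbitrary assumptions made for illustration.

```python
import numpy as np

# Simulate the self-normalized statistic (S_n - E S_n)/V_n for X ~ Exp(1),
# which has mu = 1 and lies in the domain of attraction of the normal law.
rng = np.random.default_rng(0)
n, reps, mu = 2000, 4000, 1.0

x = rng.exponential(1.0, size=(reps, n))
s = x.sum(axis=1)                          # S_n for each replication
v = np.sqrt(((x - mu) ** 2).sum(axis=1))   # V_n for each replication
t = (s - n * mu) / v                       # self-normalized statistic

# The empirical law of t should be close to N(0, 1)
print(t.mean(), t.std(), (t <= 0).mean())
```

With these sizes the empirical mean, standard deviation, and level at 0 come out close to 0, 1, and 1/2, in line with the theorem of Giné, Götze and Mason.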

The limit theorem for the products ${\prod}_{j=1}^{n}S_j$ was initiated by Arnold and Villaseñor [1]. Their result was generalized by Wu [20], Ye and Wu [22], and Rempala and Wesolowski [16], who proved that if $\{X_n; n\ge1\}$ is a sequence of i.i.d. positive random variables with finite second moment, $\mathbb{E}X_1=\mu$, $\operatorname{Var}X_1=\sigma^2>0$, and coefficient of variation $\gamma=\sigma/\mu$, then

$\left(\frac{\prod_{j=1}^{n}S_j}{n!\mu^{n}}\right)^{1/(\gamma\sqrt{n})}\stackrel{d}{\longrightarrow}\mathrm{e}^{\sqrt{2}\mathcal{N}}.$

(3)

Recently, Pang et al. [14] obtained the following self-normalized version for products of sums of i.i.d. sequences: Let ${\{X,X_n\}}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive random variables with $\mathbb{E}X=\mu>0$, and assume that X is in the domain of attraction of the normal law. Then

$\left(\frac{\prod_{j=1}^{n}S_j}{n!\mu^{n}}\right)^{\mu/V_n}\stackrel{d}{\longrightarrow}\mathrm{e}^{\sqrt{2}\mathcal{N}}.$

(4)

Brosamler [4] and Schatte [17] obtained the following almost sure central limit theorem (ASCLT): Let ${\{X_n\}}_{n\in\mathbb{N}}$ be i.i.d. random variables with mean 0, variance $\sigma^2>0$, and partial sums $S_n$. Then

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{S_k}{\sigma\sqrt{k}}\le x\right)=\Phi(x)\quad\text{a.s. for any }x\in\mathbb{R}$

(5)

with $d_k=1/k$ and $D_n=\sum_{k=1}^{n}d_k$; here and in the sequel, I denotes an indicator function, and $\Phi(x)$ is the standard normal distribution function. Some ASCLT results for partial sums were obtained by Lacey and Philipp [12], Ibragimov and Lifshits [11], Miao [13], Berkes and Csáki [2], Hörmann [9], and Wu [18, 19]. Gonchigdanzan and Rempala [8] gave an ASCLT for products of partial sums. Huang and Pang [10], Wu [21], and Zhang and Yang [23] obtained ASCLT results for the self-normalized version.

Under mild moment conditions, the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character than that of the CLT. The difference between the CLT and the ASCLT lies in the weight sequence appearing in (5).

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [5], p.35) shows that the larger the weight sequence $\{d_k; k\ge1\}$ in (5) is, the stronger the resulting relation becomes. By this argument, one should expect stronger results from larger weights, and it is of considerable interest to determine the optimal weights.

On the other hand, by Theorem 1 of Schatte [17], (5) fails for the weight $d_k=1$. The optimal weight sequence remains unknown.

The purpose of this paper is to establish the ASCLT for self-normalized products of partial sums of random variables in the domain of attraction of the normal law. We show that the ASCLT holds for the fairly general weight sequence $d_k=k^{-1}\exp\left((\ln k)^{\alpha}\right)$, $0\le\alpha<1/2$.

In the following, we assume that ${\left\{X,{X}_{n}\right\}}_{n\in \mathbb{N}}$ is a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with $\mathbb{E}X=\mu >0$. Let ${b}_{k,n}={\sum }_{j=k}^{n}1/j$, ${S}_{k}={\sum }_{i=1}^{k}{X}_{i}$, ${V}_{k}^{2}={\sum }_{i=1}^{k}{\left({X}_{i}-\mu \right)}^{2}$, ${S}_{k,k}={\sum }_{i=1}^{k}{b}_{i,k}\left({X}_{i}-\mu \right)$ for $1\le k\le n$. ${a}_{n}\sim {b}_{n}$ denotes ${lim}_{n\to \mathrm{\infty }}{a}_{n}/{b}_{n}=1$. The symbol c stands for a generic positive constant which may differ from one place to another.

Our theorem is formulated in a general setting.

Theorem 1.1 Let ${\left\{X,{X}_{n}\right\}}_{n\in \mathbb{N}}$ be a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with mean $\mu >0$. Suppose $0\le \alpha <1/2$ and set

${d}_{k}=\frac{exp\left({ln}^{\alpha }k\right)}{k},\phantom{\rule{2em}{0ex}}{D}_{n}=\sum _{k=1}^{n}{d}_{k}.$
(6)

Then

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\left(\frac{\prod_{i=1}^{k}S_i}{k!\mu^{k}}\right)^{\mu/V_k}\le x\right)=F(x)\quad\text{a.s. for any }x\in\mathbb{R}.$
(7)

Here and in the sequel, F is the distribution function of the random variable ${\mathrm{e}}^{\sqrt{2}\mathcal{N}}$.

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2 Theorem  1.1 remains valid if we replace the weight sequence ${\left\{{d}_{k}\right\}}_{k\in \mathbb{N}}$ by ${\left\{{d}_{k}^{\ast }\right\}}_{k\in \mathbb{N}}$ such that $0\le {d}_{k}^{\ast }\le {d}_{k}$, ${\sum }_{k=1}^{\mathrm{\infty }}{d}_{k}^{\ast }=\mathrm{\infty }$.

Remark 1.3 Our result substantially improves the weight sequence in Theorem 1.1 of Zhang and Yang [23].

Remark 1.4 If X is in the domain of attraction of the normal law, then $\mathbb{E}|X|^{p}<\infty$ for $0<p<2$. Conversely, if $\mathbb{E}X^{2}<\infty$, then X is in the domain of attraction of the normal law. Therefore, the class of random variables covered by Theorem 1.1 is very broad.

Remark 1.5 The problem of whether Theorem 1.1 holds for $1/2\le\alpha<1$ remains open.
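As a numerical sanity check (outside the scope of the proofs), one can approximate the expectation of the left-hand side of (7), which also converges to F(x); single-trajectory convergence of such log-type averages is far slower. The sketch below assumes X ~ Exp(1), α = 0.3, and modest sample sizes, all arbitrary illustration choices, and evaluates at x = 1, where $F(1)=\Phi(0)=1/2$.

```python
import numpy as np
from math import erf, log

def F(x):
    # Distribution function of exp(sqrt(2) * N): F(x) = Phi(ln x / sqrt 2)
    return 0.5 * (1.0 + erf(log(x) / 2.0))

rng = np.random.default_rng(1)
n, reps, alpha, mu = 2000, 400, 0.3, 1.0

k = np.arange(1, n + 1)
d = np.exp(np.log(k) ** alpha) / k         # weights d_k from (6)
D = d.sum()                                # D_n

x0 = 1.0
vals = np.empty(reps)
for r in range(reps):
    X = rng.exponential(1.0, n)
    S = X.cumsum()
    V = np.sqrt(((X - mu) ** 2).cumsum())  # V_k
    # ln( prod_{i<=k} S_i / (k! mu^k) ) = sum_{i<=k} ln(S_i / (i mu))
    logprod = np.log(S / (k * mu)).cumsum()
    ind = (mu * logprod / V) <= log(x0)    # the indicator in (7)
    vals[r] = (d * ind).sum() / D

print(vals.mean())  # slowly approaches F(1) = 0.5 as n grows
```

At these small sizes the average still carries a visible positive bias from the early summands, which is consistent with the logarithmic rate typical of ASCLT-type averages.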

## 2 Proofs

The following three lemmas will be useful in the proof; the first is due to Csörgő et al. [6].

Lemma 2.1 Let X be a random variable with $\mathbb{E}X=\mu$, and denote $l\left(x\right)=\mathbb{E}{\left(X-\mu \right)}^{2}I\left\{|X-\mu |\le x\right\}$. The following statements are equivalent.

(i) X is in the domain of attraction of the normal law.

(ii) $x^{2}\mathbb{P}\left(|X-\mu|>x\right)=o\left(l(x)\right)$.

(iii) $x\mathbb{E}\left(|X-\mu|I\left(|X-\mu|>x\right)\right)=o\left(l(x)\right)$.

(iv) $\mathbb{E}\left(|X-\mu|^{\alpha}I\left(|X-\mu|\le x\right)\right)=o\left(x^{\alpha-2}l(x)\right)$ for $\alpha>2$.

(v) $l(x)$ is a slowly varying function at ∞.

Lemma 2.2 Let ${\left\{\xi ,{\xi }_{n}\right\}}_{n\in \mathbb{N}}$ be a sequence of uniformly bounded random variables. If there exist constants $c>0$ and $\delta >0$ such that

$|\mathbb{E}\xi_k\xi_j|\le c\left(\frac{k}{j}\right)^{\delta}\quad\text{for}\ 1\le k<j,$
(8)

then

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}{\xi }_{k}=0\phantom{\rule{1em}{0ex}}\mathit{\text{a.s.}},$
(9)

where ${d}_{k}$ and ${D}_{n}$ are defined by (6).

Proof

Note that

$\mathbb{E}\left(\sum_{k=1}^{n}d_k\xi_k\right)^{2}\le\sum_{k=1}^{n}d_k^{2}\mathbb{E}\xi_k^{2}+2\sum_{1\le k<j\le n}d_kd_j|\mathbb{E}\xi_k\xi_j|=:T_{n1}+T_{n2}.$

(10)

By the assumption of Lemma 2.2, there exists a constant $c>0$ such that $|{\xi }_{k}|\le c$ for any k. Noting that $exp\left({ln}^{\alpha }x\right)=exp\left({\int }_{1}^{x}\frac{\alpha {\left(lnu\right)}^{\alpha -1}}{u}\phantom{\rule{0.2em}{0ex}}\mathrm{d}u\right)$, we have that $exp\left({ln}^{\alpha }x\right),\alpha <1$, is a slowly varying function at infinity. Hence,

${T}_{n1}\le c\sum _{k=1}^{n}\frac{exp\left(2{ln}^{\alpha }k\right)}{{k}^{2}}\le c\sum _{k=1}^{\mathrm{\infty }}\frac{exp\left(2{ln}^{\alpha }k\right)}{{k}^{2}}<\mathrm{\infty }.$

By (8), and since $(k/j)^{\delta}\le\ln^{-2}D_n$ whenever $j/k\ge\ln^{2/\delta}D_n$,

$T_{n2}\le c\sum_{1\le k<j\le n,\ j/k\ge\ln^{2/\delta}D_n}d_kd_j\left(\frac{k}{j}\right)^{\delta}+2\sum_{1\le k<j\le n,\ j/k<\ln^{2/\delta}D_n}d_kd_j|\mathbb{E}\xi_k\xi_j|\le\frac{cD_n^{2}}{\ln^{2}D_n}+T_{n3},$

(11)

where $T_{n3}:=2\sum_{1\le k<j\le n,\ j/k<\ln^{2/\delta}D_n}d_kd_j|\mathbb{E}\xi_k\xi_j|$.

On the other hand, if $\alpha =0$, we have ${d}_{k}=\mathrm{e}/k$, ${D}_{n}\sim \mathrm{e}lnn$, and hence, for sufficiently large n,

${T}_{n3}\le c\sum _{k=1}^{n}\frac{1}{k}\sum _{j=k}^{k{ln}^{2/\delta }{D}_{n}}\frac{1}{j}\le c{D}_{n}lnln{D}_{n}\le \frac{{D}_{n}^{2}}{{ln}^{2}{D}_{n}}.$
(12)

If $0<\alpha<1/2$, then, since $y^{-\alpha}\to0$ as $y\to\infty$, for arbitrarily small $\epsilon>0$ there exists $n_0$ such that for $y\ge\ln n_0$, $(1-\alpha)y^{-\alpha}/\alpha<\epsilon$. Therefore

$\begin{array}{rcl}1& \le & \frac{{\int }_{0}^{lnn}\left(exp\left({y}^{\alpha }\right)+\frac{1-\alpha }{\alpha }{y}^{-\alpha }exp\left({y}^{\alpha }\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y}{{\int }_{0}^{lnn}exp\left({y}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y}\\ \le & \frac{{\int }_{0}^{ln{n}_{0}}exp\left({y}^{\alpha }\right)\left(1+\frac{1-\alpha }{\alpha }{y}^{-\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y+\left(1+\epsilon \right){\int }_{ln{n}_{0}}^{lnn}exp\left({y}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y}{{\int }_{0}^{lnn}exp\left({y}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y}\to 1+\epsilon .\end{array}$

This implies

${\int }_{0}^{lnn}exp\left({y}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y\sim {\int }_{0}^{lnn}\left(exp\left({y}^{\alpha }\right)+\frac{1-\alpha }{\alpha }{y}^{-\alpha }exp\left({y}^{\alpha }\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y$

from the arbitrariness of ε.

Hence,

$\begin{array}{rcl}{D}_{n}& \sim & {\int }_{1}^{n}\frac{exp\left({ln}^{\alpha }x\right)}{x}\phantom{\rule{0.2em}{0ex}}\mathrm{d}x={\int }_{0}^{lnn}exp\left({y}^{\alpha }\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y\\ \sim & {\int }_{0}^{lnn}\left(exp\left({y}^{\alpha }\right)+\frac{1-\alpha }{\alpha }{y}^{-\alpha }exp\left({y}^{\alpha }\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}y\\ =& {\int }_{0}^{lnn}\frac{1}{\alpha }{\left({y}^{1-\alpha }exp\left({y}^{\alpha }\right)\right)}^{\prime }\phantom{\rule{0.2em}{0ex}}\mathrm{d}y\\ =& \frac{1}{\alpha }{ln}^{1-\alpha }nexp\left({ln}^{\alpha }n\right),\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty }.\end{array}$
(13)

This implies

$ln{D}_{n}\sim {ln}^{\alpha }n,\phantom{\rule{2em}{0ex}}exp\left({ln}^{\alpha }n\right)\sim \frac{\alpha {D}_{n}}{{\left(ln{D}_{n}\right)}^{\frac{1-\alpha }{\alpha }}},\phantom{\rule{2em}{0ex}}lnln{D}_{n}\sim \alpha lnlnn.$

Thus, combining this with $|\xi_k|\le c$ for any k,

$T_{n3}\le c\sum_{k=1}^{n}d_k\sum_{j=k}^{k\ln^{2/\delta}D_n}\frac{\exp(\ln^{\alpha}j)}{j}\le c\exp\left(\ln^{\alpha}n\right)\ln\ln D_n\sum_{k=1}^{n}d_k\le\frac{cD_n^{2}\ln\ln D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}.$

Since $\alpha<1/2$ implies $(1-2\alpha)/(2\alpha)>0$ and $\epsilon_1:=1/(2\alpha)-1>0$, for sufficiently large n we get

${T}_{n3}\le c\frac{{D}_{n}^{2}}{{\left(ln{D}_{n}\right)}^{1/\left(2\alpha \right)}}\frac{lnln{D}_{n}}{{\left(ln{D}_{n}\right)}^{\left(1-2\alpha \right)/\left(2\alpha \right)}}\le \frac{{D}_{n}^{2}}{{\left(ln{D}_{n}\right)}^{1/\left(2\alpha \right)}}=\frac{{D}_{n}^{2}}{{\left(ln{D}_{n}\right)}^{1+{\epsilon }_{1}}}.$
(14)

Let ${T}_{n}:=\frac{1}{{D}_{n}}{\sum }_{k=1}^{n}{d}_{k}{\xi }_{k}$, ${\epsilon }_{2}:=min\left(1,{\epsilon }_{1}\right)$. Combining (10)-(12) and (14), for sufficiently large n, we get

$\mathbb{E}T_n^{2}\le\frac{c}{(\ln D_n)^{1+\epsilon_2}}.$

By (13), we have $D_{n+1}\sim D_n$. Let $0<\eta<\frac{\epsilon_2}{1+\epsilon_2}$ and $n_k=\inf\{n; D_n\ge\exp(k^{1-\eta})\}$; then $D_{n_k}\ge\exp(k^{1-\eta})$ and $D_{n_k-1}<\exp(k^{1-\eta})$. Therefore

$1\le \frac{{D}_{{n}_{k}}}{exp\left({k}^{1-\eta }\right)}\sim \frac{{D}_{{n}_{k}-1}}{exp\left({k}^{1-\eta }\right)}<1,$

that is,

${D}_{{n}_{k}}\sim exp\left({k}^{1-\eta }\right).$

Since $(1-\eta)(1+\epsilon_2)>1$ by the definition of η, for any $\epsilon>0$ we have

$\sum _{k=1}^{\mathrm{\infty }}P\left(|{T}_{{n}_{k}}|>\epsilon \right)\le c\sum _{k=1}^{\mathrm{\infty }}E{T}_{{n}_{k}}^{2}\le c\sum _{k=1}^{\mathrm{\infty }}\frac{1}{{k}^{\left(1-\eta \right)\left(1+{\epsilon }_{2}\right)}}<\mathrm{\infty }.$

By the Borel-Cantelli lemma,

${T}_{{n}_{k}}\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$

Now, for $n_k<n\le n_{k+1}$, by $|\xi_k|\le c$ for any k,

$|{T}_{n}|\le |{T}_{{n}_{k}}|+\frac{c}{{D}_{{n}_{k}}}\sum _{i={n}_{k}+1}^{{n}_{k+1}}{d}_{i}\le |{T}_{{n}_{k}}|+c\left(\frac{{D}_{{n}_{k+1}}}{{D}_{{n}_{k}}}-1\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}$

from $\frac{{D}_{{n}_{k+1}}}{{D}_{{n}_{k}}}\sim \frac{exp\left({\left(k+1\right)}^{1-\eta }\right)}{exp\left({k}^{1-\eta }\right)}=exp\left({k}^{1-\eta }\left({\left(1+1/k\right)}^{1-\eta }-1\right)\right)\sim exp\left(\left(1-\eta \right){k}^{-\eta }\right)\to 1$, i.e., (9) holds. This completes the proof of Lemma 2.2. □
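The first equivalence in (13), namely $D_n\sim\int_0^{\ln n}\exp(y^{\alpha})\,\mathrm{d}y$, can be verified numerically; the sketch below uses the arbitrary choices α = 0.3 and n = 10^6. (The closed-form expression $\frac{1}{\alpha}\ln^{1-\alpha}n\exp(\ln^{\alpha}n)$ is approached far more slowly, since the correction terms decay only at roughly the rate $\ln^{-\alpha}n$.)

```python
from math import exp, log

alpha, n = 0.3, 10**6   # arbitrary illustration parameters

# D_n = sum_{k=1}^n exp(ln^alpha k) / k, the weights from (6)
D = sum(exp(log(k) ** alpha) / k for k in range(1, n + 1))

# I_n = integral_0^{ln n} exp(y^alpha) dy, via the midpoint rule
m = 200_000
h = log(n) / m
I = h * sum(exp(((i + 0.5) * h) ** alpha) for i in range(m))

print(D, I, D / I)  # the ratio is close to 1
```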

Let $l(x)=\mathbb{E}(X-\mu)^{2}I\{|X-\mu|\le x\}$, $b=\inf\{x\ge1; l(x)>0\}$ and

$\eta_j=\inf\left\{s; s\ge b+1, \frac{l(s)}{s^{2}}\le\frac{1}{j}\right\},\quad j=1,2,\ldots.$

By the definition of $\eta_j$, we have $jl(\eta_j)\le\eta_j^{2}$ and $jl(\eta_j-\epsilon)>(\eta_j-\epsilon)^{2}$ for any $\epsilon>0$. It implies that

$jl(\eta_j)\sim\eta_j^{2},\quad j\to\infty.$

(15)
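To make these quantities concrete, consider X uniform on [0,1] (an arbitrary example), and assume the standard choice $\eta_j=\inf\{s; s\ge b+1, l(s)/s^{2}\le1/j\}$ used in this setting. Then $l(x)=1/12$ for $x\ge1/2$, so $b=1$, $\eta_j=\max(2,\sqrt{j/12})$, and the relation (15) becomes an exact equality once $\sqrt{j/12}\ge2$, i.e., $j\ge48$. A minimal sketch:

```python
from math import sqrt

# X ~ Uniform(0, 1): mu = 1/2 and l(x) = E(X - mu)^2 I(|X - mu| <= x)
def l(x):
    if x >= 0.5:
        return 1.0 / 12.0       # the full variance of Uniform(0, 1)
    return 2.0 * x ** 3 / 3.0   # integral of u^2 over [-x, x]

def eta(j, step=1e-3):
    # smallest s >= b + 1 = 2 with l(s)/s^2 <= 1/j, found by scanning
    s = 2.0
    while l(s) / s ** 2 > 1.0 / j:
        s += step
    return s

for j in (10, 100, 1000):
    e = eta(j)
    print(j, e, j * l(e) / e ** 2)  # last column tends to 1, as in (15)
```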

For every $1\le i\le k\le n$, let

${\overline{X}}_{ki}=\left({X}_{i}-\mu \right)I\left(|{X}_{i}-\mu |\le {\eta }_{k}\right),\phantom{\rule{2em}{0ex}}{\overline{V}}_{k}^{2}=\sum _{i=1}^{k}{\overline{X}}_{ki}^{2},\phantom{\rule{2em}{0ex}}{\overline{S}}_{k,k}=\sum _{i=1}^{k}{b}_{i,k}{\overline{X}}_{ki}.$

Lemma 2.3 Suppose that the assumptions of Theorem 1.1 hold. Then

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\overline{S}_{k,k}-\mathbb{E}\overline{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\le x\right)=\Phi(x)\quad\text{a.s. for any }x\in\mathbb{R},$

(16)

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)-\mathbb{E}I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)\right)=0\quad\text{a.s.},$

(17)

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(f\left(\frac{\overline{V}_k^{2}}{kl(\eta_k)}\right)-\mathbb{E}f\left(\frac{\overline{V}_k^{2}}{kl(\eta_k)}\right)\right)=0\quad\text{a.s.},$

(18)

where ${d}_{k}$ and ${D}_{n}$ are defined by (6) and f is a non-negative, bounded Lipschitz function.

Proof By the central limit theorem for i.i.d. random variables and $\operatorname{Var}\overline{S}_{n,n}\sim 2nl(\eta_n)$ as $n\to\infty$, which follows from $\sum_{k=1}^{n}b_{k,n}^{2}\sim 2n$, it follows that

$\frac{\overline{S}_{n,n}-\mathbb{E}\overline{S}_{n,n}}{\sqrt{2nl(\eta_n)}}\stackrel{d}{\longrightarrow}\mathcal{N},$

where $\mathcal{N}$ denotes the standard normal random variable. This implies that for any $g(x)$ which is a non-negative, bounded Lipschitz function,

$\mathbb{E}g\left(\frac{\overline{S}_{n,n}-\mathbb{E}\overline{S}_{n,n}}{\sqrt{2nl(\eta_n)}}\right)\to\mathbb{E}g(\mathcal{N}),\quad n\to\infty.$

Hence, we obtain

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right)=\mathbb{E}g\left(\mathcal{N}\right)$

from the Toeplitz lemma.

On the other hand, note that (16) is equivalent to

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right)=\mathbb{E}g\left(\mathcal{N}\right)\phantom{\rule{1em}{0ex}}\text{a.s.}$

from Theorem 7.1 of Billingsley [3] and Section 2 of Peligrad and Shao [15]. Hence, to prove (16), it suffices to prove

$\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\left(g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right)-\mathbb{E}g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right)\right)=0\phantom{\rule{1em}{0ex}}\text{a.s.}$
(19)

for any $g\left(x\right)$ which is a non-negative, bounded Lipschitz function.

For any $k\ge 1$, let

${\xi }_{k}=g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right)-\mathbb{E}g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right).$

For any $1\le k<j\le n$, note that $g\left(\frac{\overline{S}_{k,k}-\mathbb{E}\overline{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)$ and $g\left(\frac{\overline{S}_{j,j}-\mathbb{E}\overline{S}_{j,j}-\sum_{i=1}^{k}b_{i,j}(\overline{X}_{ji}-\mathbb{E}\overline{X}_{ji})}{\sqrt{2jl(\eta_j)}}\right)$ are independent and $g(x)$ is a non-negative, bounded Lipschitz function. By the definition of $\eta_j$, we get

$\begin{array}{rcl}|\mathbb{E}{\xi }_{k}{\xi }_{j}|& =& |Cov\left(g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right),g\left(\frac{{\overline{S}}_{j,j}-\mathbb{E}{\overline{S}}_{j,j}}{\sqrt{2jl\left({\eta }_{j}\right)}}\right)\right)|\\ =& |Cov\left(g\left(\frac{{\overline{S}}_{k,k}-\mathbb{E}{\overline{S}}_{k,k}}{\sqrt{2kl\left({\eta }_{k}\right)}}\right),g\left(\frac{{\overline{S}}_{j,j}-\mathbb{E}{\overline{S}}_{j,j}}{\sqrt{2jl\left({\eta }_{j}\right)}}\right)\\ -g\left(\frac{{\overline{S}}_{j,j}-\mathbb{E}{\overline{S}}_{j,j}-{\sum }_{i=1}^{k}{b}_{i,j}\left({\overline{X}}_{ji}-\mathbb{E}{\overline{X}}_{ji}\right)}{\sqrt{2jl\left({\eta }_{j}\right)}}\right)\right)|\\ \le & c\frac{\mathbb{E}|{\sum }_{i=1}^{k}{b}_{i,j}\left({\overline{X}}_{ji}-\mathbb{E}{\overline{X}}_{ji}\right)|}{\sqrt{jl\left({\eta }_{j}\right)}}\le c\frac{\sqrt{\mathbb{E}{\left({\sum }_{i=1}^{k}{b}_{i,j}\left({\overline{X}}_{ji}-\mathbb{E}{\overline{X}}_{ji}\right)\right)}^{2}}}{\sqrt{jl\left({\eta }_{j}\right)}}\le c\frac{\sqrt{{\sum }_{i=1}^{k}{b}_{i,j}^{2}\mathbb{E}{\overline{X}}_{ji}^{2}}}{\sqrt{jl\left({\eta }_{j}\right)}}\\ \le & c\frac{\sqrt{{\sum }_{i=1}^{k}{\left({b}_{i,k}+{b}_{k+1,j}\right)}^{2}l\left({\eta }_{j}\right)}}{\sqrt{jl\left({\eta }_{j}\right)}}\le c\frac{\sqrt{{\sum }_{i=1}^{k}{b}_{i,k}^{2}+{\sum }_{i=1}^{k}{b}_{k+1,j}^{2}}}{\sqrt{j}}\\ \le & c\frac{\sqrt{k+k{ln}^{2}\left(j/k\right)}}{\sqrt{j}}\le c{\left(\frac{k}{j}\right)}^{1/4}.\end{array}$

By Lemma 2.2, (19) holds.

Now we prove (17). Let

$Z_k=I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)-\mathbb{E}I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right).$

It is known that $I(A\cup B)-I(B)\le I(A)$ for any sets A and B. Then for $1\le k<j\le n$, by Lemma 2.1(ii) and (15), we get

$\mathbb{P}\left(|X-\mu |>{\eta }_{j}\right)=o\left(1\right)\frac{l\left({\eta }_{j}\right)}{{\eta }_{j}^{2}}=\frac{o\left(1\right)}{j}.$
(20)

Hence

$\begin{array}{rcl}|\mathbb{E}{Z}_{k}{Z}_{j}|& =& |Cov\left(I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}-\mu |>{\eta }_{k}\right)\right),I\left(\bigcup _{i=1}^{j}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)\right)|\\ =& |Cov\left(I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}-\mu |>{\eta }_{k}\right)\right),I\left(\bigcup _{i=1}^{j}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)\\ -I\left(\bigcup _{i=k+1}^{j}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)\right)|\\ \le & \mathbb{E}|I\left(\bigcup _{i=1}^{j}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)-I\left(\bigcup _{i=k+1}^{j}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)|\\ \le & \mathbb{E}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}-\mu |>{\eta }_{j}\right)\right)\le k\mathbb{P}\left(|X-\mu |>{\eta }_{j}\right)\\ \le & \frac{k}{j}.\end{array}$

By Lemma 2.2, (17) holds.

Finally, we prove (18). Let

$\zeta_k=f\left(\frac{\overline{V}_k^{2}}{kl(\eta_k)}\right)-\mathbb{E}f\left(\frac{\overline{V}_k^{2}}{kl(\eta_k)}\right).$

For $1\le k<j\le n$,

$\begin{array}{rcl}|\mathbb{E}{\zeta }_{k}{\zeta }_{j}|& =& |Cov\left(f\left(\frac{{\overline{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right),f\left(\frac{{\overline{V}}_{j}^{2}}{jl\left({\eta }_{j}\right)}\right)\right)|\\ =& |Cov\left(f\left(\frac{{\overline{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right),f\left(\frac{{\overline{V}}_{j}^{2}}{jl\left({\eta }_{j}\right)}\right)-f\left(\frac{{\overline{V}}_{j}^{2}-{\sum }_{i=1}^{k}{\left({X}_{i}-\mu \right)}^{2}I\left(|{X}_{i}-\mu |\le {\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}\right)\right)|\\ \le & c\frac{\mathbb{E}\left({\sum }_{i=1}^{k}{\left({X}_{i}-\mu \right)}^{2}I\left(|{X}_{i}-\mu |\le {\eta }_{j}\right)\right)}{jl\left({\eta }_{j}\right)}=c\frac{k\mathbb{E}{\left(X-\mu \right)}^{2}I\left(|X-\mu |\le {\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}=c\frac{kl\left({\eta }_{j}\right)}{jl\left({\eta }_{j}\right)}\\ =& c\frac{k}{j}.\end{array}$

By Lemma 2.2, (18) holds. This completes the proof of Lemma 2.3. □

Proof of Theorem 1.1 Let $U_i=S_i/(i\mu)$. Since F is the distribution function of $\mathrm{e}^{\sqrt{2}\mathcal{N}}$, we have $F(x)=\Phi(\ln x/\sqrt{2})$ for $x>0$, and taking logarithms in (7) shows that (7) is equivalent to

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\mu\sum_{i=1}^{k}\ln U_i}{\sqrt{2}V_k}\le x\right)=\Phi(x)\quad\text{a.s. for any }x\in\mathbb{R}.$

(21)

Let $q\in(4/3,2)$; then $\mathbb{E}|X|<\infty$ and $\mathbb{E}|X|^{q}<\infty$ by Remark 1.4. Using the Marcinkiewicz-Zygmund strong law of large numbers, we have

$U_k-1=\frac{S_k-\mu k}{k\mu}\to0\quad\text{a.s.},\qquad S_k-\mu k=o\left(k^{1/q}\right)\quad\text{a.s.}$

Hence, letting $a_k=\sqrt{2(1\pm\epsilon)kl(\eta_k)}$ for any given $0<\epsilon<1$, by $|\ln(1+x)-x|=O(x^{2})$ for $|x|<1/2$,

$\frac{\mu}{a_k}\left|\sum_{i=1}^{k}\ln U_i-\sum_{i=1}^{k}(U_i-1)\right|\le\frac{c}{a_k}\sum_{i=1}^{k}(U_i-1)^{2}=\frac{c}{a_k}\sum_{i=1}^{k}\frac{(S_i-i\mu)^{2}}{i^{2}\mu^{2}}=\frac{o(k^{2/q-1})}{\sqrt{kl(\eta_k)}}\to0\quad\text{a.s.}$

from $3/2-2/q>0$, the fact that $l(x)$ is a slowly varying function at ∞, and $\eta_k\le k+1$.

Therefore, for any $\delta >0$ and almost every event ω, there exists ${k}_{0}={k}_{0}\left(\omega ,\delta ,x\right)$ such that for $k>{k}_{0}$,

$\begin{array}{rcl}I\left(\frac{\mu }{{a}_{k}}\sum _{i=1}^{k}\left({U}_{i}-1\right)\le x-\delta \right)& \le & I\left(\frac{\mu }{{a}_{k}}\sum _{i=1}^{k}ln{U}_{i}\le x\right)\\ \le & I\left(\frac{\mu }{{a}_{k}}\sum _{i=1}^{k}\left({U}_{i}-1\right)\le x+\delta \right).\end{array}$
(22)

Note that under the condition $|{X}_{j}-\mu |\le {\eta }_{k}$, $1\le j\le k$,

$\begin{array}{rcl}\mu \sum _{i=1}^{k}\left({U}_{i}-1\right)& =& \sum _{i=1}^{k}\frac{{S}_{i}-i\mu }{i}=\sum _{i=1}^{k}\frac{1}{i}\sum _{j=1}^{i}\left({X}_{j}-\mu \right)\\ =& \sum _{j=1}^{k}\sum _{i=j}^{k}\frac{1}{i}{\overline{X}}_{kj}=\sum _{j=1}^{k}{b}_{j,k}{\overline{X}}_{kj}={\overline{S}}_{k,k}.\end{array}$
(23)

Thus, by (22) and (23), for any given $0<\epsilon<1$ and $\delta>0$, we have for $k>k_0$,

$I\left(\frac{\mu\sum_{i=1}^{k}\ln U_i}{\sqrt{2}V_k}\le x\right)\le I\left(\frac{\overline{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\le\sqrt{1-\epsilon}\,x+\delta_1\right)+I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)+I\left(\overline{V}_k^{2}<(1-\epsilon)kl(\eta_k)\right)$

and

$I\left(\frac{\mu\sum_{i=1}^{k}\ln U_i}{\sqrt{2}V_k}\le x\right)\ge I\left(\frac{\overline{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\le\sqrt{1+\epsilon}\,x-\delta_1\right)-I\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)-I\left(\overline{V}_k^{2}>(1+\epsilon)kl(\eta_k)\right),$

with $\delta_1:=\sqrt{1\pm\epsilon}\,\delta$. Hence, to prove (21), it suffices to prove

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\overline{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x\pm\delta_1\right)=\Phi\left(\sqrt{1\pm\epsilon}\,x\pm\delta_1\right)\quad\text{a.s.},$

(24)

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bigcup_{i=1}^{k}\left(|X_i-\mu|>\eta_k\right)\right)=0\quad\text{a.s.},$

(25)

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\overline{V}_k^{2}>(1+\epsilon)kl(\eta_k)\right)=0\quad\text{a.s.},$

(26)

$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\overline{V}_k^{2}<(1-\epsilon)kl(\eta_k)\right)=0\quad\text{a.s.}$

(27)

for any $0<\epsilon<1$ and $\delta_1>0$; letting $\epsilon\to0$ and $\delta_1\to0$ then yields (21) by the continuity of Φ.

Firstly, we prove (24). Let $0<\beta <1/2$, and let $h\left(\cdot \right)$ be a real function such that for any given $x\in \mathbb{R}$,

$I\left(y\le \sqrt{1±\epsilon }x±{\delta }_{1}-\beta \right)\le h\left(y\right)\le I\left(y\le \sqrt{1±\epsilon }x±{\delta }_{1}+\beta \right).$
(28)

By $\mathbb{E}\left({X}_{i}-\mu \right)=0$, Lemma 2.1(iii) and (15), we have

$\begin{array}{rcl}|\mathbb{E}{\overline{S}}_{k,k}|& =& |\mathbb{E}\sum _{i=1}^{k}{b}_{i,k}\left({X}_{i}-\mu \right)I\left(|{X}_{i}-\mu |\le {\eta }_{k}\right)|\le \sum _{i=1}^{k}{b}_{i,k}\mathbb{E}|{X}_{i}-\mu |I\left(|{X}_{i}-\mu |>{\eta }_{k}\right)\\ =& \sum _{i=1}^{k}\sum _{j=i}^{k}\frac{1}{j}\mathbb{E}|X-\mu |I\left(|X-\mu |>{\eta }_{k}\right)=\sum _{j=1}^{k}\sum _{i=1}^{j}\frac{1}{j}\frac{o\left(l\left({\eta }_{k}\right)\right)}{{\eta }_{k}}\\ =& o\left(\sqrt{kl\left({\eta }_{k}\right)}\right).\end{array}$

Combining this with (16), (28), and the arbitrariness of β in (28), (24) holds.

By (17), (20) and the Toeplitz lemma,

$\begin{array}{rcl}0& \le & \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}-\mu |>{\eta }_{k}\right)\right)\sim \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}I\left(\bigcup _{i=1}^{k}\left(|{X}_{i}-\mu |>{\eta }_{k}\right)\right)\\ \le & \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}k\mathbb{P}\left(|X-\mu |>{\eta }_{k}\right)\to 0\phantom{\rule{1em}{0ex}}\text{a.s.}\end{array}$

Hence, (25) holds.

Now we prove (26). For any $\lambda >0$, let f be a non-negative, bounded Lipschitz function such that

$I\left(x>1+\lambda \right)\le f\left(x\right)\le I\left(x>1+\lambda /2\right).$

Since $\mathbb{E}\overline{V}_k^{2}=kl(\eta_k)$ and the $\overline{X}_{ki}$, $1\le i\le k$, are i.i.d., by Lemma 2.1(iv) and (15),

$\begin{array}{rcl}\mathbb{P}\left({\overline{V}}_{k}^{2}>\left(1+\frac{\lambda }{2}\right)kl\left({\eta }_{k}\right)\right)& =& \mathbb{P}\left({\overline{V}}_{k}^{2}-\mathbb{E}{\overline{V}}_{k}^{2}>\lambda kl\left({\eta }_{k}\right)/2\right)\\ \le & c\frac{\mathbb{E}{\left({\overline{V}}_{k}^{2}-\mathbb{E}{\overline{V}}_{k}^{2}\right)}^{2}}{{k}^{2}{l}^{2}\left({\eta }_{k}\right)}\le c\frac{\mathbb{E}{\left(X-\mu \right)}^{4}I\left(|X-\mu |\le {\eta }_{k}\right)}{k{l}^{2}\left({\eta }_{k}\right)}\\ =& \frac{o\left(1\right){\eta }_{k}^{2}}{kl\left({\eta }_{k}\right)}=o\left(1\right)\to 0.\end{array}$

Therefore, from (18) and the Toeplitz lemma,

$\begin{array}{rcl}0& \le & \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}I\left({\overline{V}}_{k}^{2}>\left(1+\lambda \right)kl\left({\eta }_{k}\right)\right)\le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}f\left(\frac{{\overline{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)\\ \sim & \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}f\left(\frac{{\overline{V}}_{k}^{2}}{kl\left({\eta }_{k}\right)}\right)\le \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{E}I\left({\overline{V}}_{k}^{2}>\left(1+\lambda /2\right)kl\left({\eta }_{k}\right)\right)\\ =& \frac{1}{{D}_{n}}\sum _{k=1}^{n}{d}_{k}\mathbb{P}\left({\overline{V}}_{k}^{2}>\left(1+\lambda /2\right)kl\left({\eta }_{k}\right)\right)\\ \to & 0\phantom{\rule{1em}{0ex}}\text{a.s.}\end{array}$

Hence, (26) holds. By similar methods used to prove (26), we can prove (27). This completes the proof of Theorem 1.1. □

## Authors’ information

Qunying Wu, Professor, Doctor, working in the field of probability and statistics.

## References

1. Arnold BC, Villaseñor JA: The asymptotic distribution of sums of records. Extremes 1998, 1(3):351–363.

2. Berkes I, Csáki E: A universal result in almost sure central limit theory. Stoch. Process. Appl. 2001, 94: 105–134. 10.1016/S0304-4149(01)00078-3

3. Billingsley P: Convergence of Probability Measures. Wiley, New York; 1968.

4. Brosamler GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 1988, 104: 561–574. 10.1017/S0305004100065750

5. Chandrasekharan K, Minakshisundaram S: Typical Means. Oxford University Press, Oxford; 1952.

6. Csörgő M, Szyszkowicz B, Wang QY: Donsker’s theorem for self-normalized partial sums processes. Ann. Probab. 2003, 31(3):1228–1240. 10.1214/aop/1055425777

7. Gine E, Götze F, Mason DM: When is the Student t -statistic asymptotically standard normal? Ann. Probab. 1997, 25: 1514–1531.

8. Gonchigdanzan K, Rempala G: A note on the almost sure central limit theorem for the product of partial sums. Appl. Math. Lett. 2006, 19: 191–196. 10.1016/j.aml.2005.06.002

9. Hörmann S: Critical behavior in almost sure central limit theory. J. Theor. Probab. 2007, 20: 613–636. 10.1007/s10959-007-0080-3

10. Huang SH, Pang TX: An almost sure central limit theorem for self-normalized partial sums. Comput. Math. Appl. 2010, 60: 2639–2644. 10.1016/j.camwa.2010.08.093

11. Ibragimov IA, Lifshits M: On the convergence of generalized moments in almost sure central limit theorem. Stat. Probab. Lett. 1998, 40: 343–351. 10.1016/S0167-7152(98)00134-5

12. Lacey MT, Philipp W: A note on the almost sure central limit theorem. Stat. Probab. Lett. 1990, 9: 201–205. 10.1016/0167-7152(90)90056-D

13. Miao Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 2008, 118(2):289–294. 10.1007/s12044-008-0021-9

14. Pang TX, Lin ZY, Hwang KS: Asymptotics for self-normalized random products of sums of i.i.d. random variables. J. Math. Anal. Appl. 2007, 334: 1246–1259. 10.1016/j.jmaa.2006.12.085

15. Peligrad M, Shao QM: A note on the almost sure central limit theorem for weakly dependent random variables. Stat. Probab. Lett. 1995, 22: 131–136. 10.1016/0167-7152(94)00059-H

16. Rempala G, Wesolowski J: Asymptotics for products of sums and U -statistics. Electron. Commun. Probab. 2002, 7: 47–54.

17. Schatte P: On strong versions of the central limit theorem. Math. Nachr. 1988, 137: 249–256. 10.1002/mana.19881370117

18. Wu QY: Almost sure limit theorems for stable distribution. Stat. Probab. Lett. 2011, 81(6):662–672. 10.1016/j.spl.2011.02.003

19. Wu QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc. Indian Acad. Sci. Math. Sci. 2011, 121(3):369–377. 10.1007/s12044-011-0036-5

20. Wu QY: Almost sure central limit theory for products of sums of partial sums. Appl. Math. J. Chin. Univ. Ser. B 2012, 27(2):169–180.

21. Wu QY: A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J. Inequal. Appl. 2012, 2012: Article ID 17. doi:10.1186/1029-242X-2012-17

22. Ye DX, Wu QY: Almost sure central limit theorem for product of partial sums of strongly mixing random variables. J. Inequal. Appl. 2011., 2011: Article ID 576301

23. Zhang Y, Yang XY: An almost sure central limit theorem for self-normalized products of sums of i.i.d. random variables. J. Math. Anal. Appl. 2011, 376: 29–41. 10.1016/j.jmaa.2010.10.021

## Acknowledgements

The authors are very grateful to the referees and the editors for their valuable comments and helpful suggestions that improved the clarity and readability of the paper. Supported by the National Natural Science Foundation of China (11061012) and the Support Program of the Guangxi China Science Foundation (2012GXNSFAA053010).

## Author information


### Corresponding author

Correspondence to Qunying Wu.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

QW conceived of the study and drafted and completed the manuscript. PC participated in the discussion of the manuscript. QW and PC read and approved the final manuscript.

## Rights and permissions


Wu, Q., Chen, P. An improved result in almost sure central limit theorem for self-normalized products of partial sums. J Inequal Appl 2013, 129 (2013). https://doi.org/10.1186/1029-242X-2013-129 