# A generalization of almost sure local limit theorem of uniform empirical process

## Abstract

Let $$\{X_{n}; n\geq1\}$$ be a sequence of independent and identically distributed $$U[0, 1]$$ random variables, and let $$F_{n}$$ denote the corresponding uniform empirical process. In this paper, we establish almost sure local central limit theorems for $$\|F_{n}\|$$ and $$\sup_{0\leq t\leq1}F_{n}(t)$$, and some corresponding results are derived.

## Introduction

Throughout this paper, let $$\{X_{n}; n\geq1\}$$ be a sequence of independent and identically distributed $$U[0, 1]$$ random variables and put $$S_{n}=\sum_{k=1}^{n}X_{k}$$. Define the uniform empirical process $$F_{n}(t)=n^{-\frac{1}{2}}\sum_{i=1}^{n}(I_{\{X_{i}\leq t\}}-t)$$, $$0\leq t\leq1$$, and $$\|F_{n}\|=\sup_{0\leq t\leq1}|F_{n}(t)|$$. In recent years there has been a lively interest in probability theory in almost sure versions of classical limit theorems. The prototype of such a theorem is the almost sure central limit theorem (ASCLT), whose simplest form is as follows:

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n}\frac{1}{k} I\biggl\{ \frac{S_{k}}{\sqrt{k}} \leq x\biggr\} =\Phi(x)\quad \mbox{a.s.}$$
(1.1)

for all $$x\in R$$; here and in the sequel, $$I\{A\}$$ denotes the indicator function of the event A and $$\Phi(x)$$ stands for the standard normal distribution function. (In the generic statement (1.1), the $$X_{n}$$ are assumed to have mean 0 and variance 1.) This result was first proved independently by Brosamler  and Schatte  under stronger moment conditions. Since then, this type of almost sure version, which mainly deals with logarithmic averages, has been extended in various directions.

In particular, Fahrner and Stadtmüller  and Cheng et al.  extended this almost sure convergence from partial sums to maxima $$M_{k}=\max_{1\leq i\leq k}X_{i}$$ of independent and identically distributed (i.i.d.) random variables. Under suitable conditions, they proved the following:

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n}\frac{1}{k} I\biggl\{ \frac{M_{k}-b_{k}}{a_{k}} \leq x\biggr\} =G(x) \quad \mbox{a.s.}$$
(1.2)

for all $$x\in R$$, where $$a_{k}>0$$ and $$b_{k}\in R$$ satisfy

$$\lim_{k\rightarrow\infty}P\biggl(\frac{M_{k}-b_{k}}{a_{k}}\leq x\biggr)=G(x)$$

for any continuity point x of G.
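A concrete instance of (1.2) can be sketched numerically (an illustration of ours, not from the paper): for i.i.d. Exp(1) variables one may take $$a_{k}=1$$ and $$b_{k}=\log k$$, which give the Gumbel limit $$G(x)=\exp(-e^{-x})$$:

```python
import math
import random

def max_asclt_log_average(n, x, rng):
    """Log-average of I{(M_k - log k)/1 <= x} for Exp(1) variables,
    i.e. (1.2) with a_k = 1, b_k = log k and G(x) = exp(-exp(-x))."""
    m = float("-inf")
    total = 0.0
    for k in range(1, n + 1):
        m = max(m, rng.expovariate(1.0))  # running maximum M_k
        if m - math.log(k) <= x:
            total += 1.0 / k
    return total / math.log(n)

gumbel = math.exp(-math.exp(-0.0))  # G(0) = exp(-1) ~ 0.368
rng = random.Random(2024)
avg = max_asclt_log_average(100_000, 0.0, rng)
print(avg, gumbel)
```

As with (1.1), the logarithmic averaging makes the convergence slow, so the simulated log-average only roughly matches $$G(0)$$.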

For Gaussian sequences, Csáki and Gonchigdanzan  established the validity of (1.2) for maxima of stationary Gaussian sequences under some mild conditions, and Chen and Lin  extended it to non-stationary Gaussian sequences. For other dependent random variables, Peligrad and Shao  and Dudziński  derived corresponding almost sure central limit theorems. A joint version of the almost sure central limit theorem with logarithmic averages for i.i.d. random variables was obtained by Peng et al. , and a joint version of the almost sure limit theorem for logarithmic averages of maxima and partial sums of stationary Gaussian random variables was derived by Dudziński . In this direction, an extension of almost sure central limit theory was studied by Hörmann .

Moreover, Wu  explored almost sure limit theorems for products of partial sums, for stable distributions, and for products of sums of partial sums. Zang  derived an almost sure limit theorem of random fields for more general weights than the usual logarithmic average. Recently, Zhang  established the almost sure central limit theorem for uniform empirical processes with logarithmic averages, and subsequently, under some regularity conditions, a general almost sure central limit theorem for uniform empirical processes with general weights was derived by Zang  using the methodology of Hörmann .

On the other hand, Chung and Erdős  proved the following result.

### Theorem A

Let $$X_{1},X_{2}, \ldots$$ be a sequence of i.i.d. integer-valued random variables with $$EX_{1}=0$$. Assume that every integer a is a possible value of $$S_{k}$$ for all sufficiently large k. Then

$$\lim_{n\rightarrow\infty}\frac{1}{\log T_{n}}\sum_{k=1}^{n} \frac{I\{S_{k}=a\}}{T_{k}}=1\quad \textit{a.s.},$$

where

$$T_{n}=\sum_{k=1}^{n}P(S_{k}=a).$$

Further, if the condition $$EX_{1}^{2}=\sigma^{2}<\infty$$ is satisfied, we have $$T_{k}\sim \frac{\sqrt{2k}}{\sigma\sqrt{\pi}}$$, and therefore

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac{I\{S_{k}=0\}}{\sqrt{k}}=\frac{1}{\sigma\sqrt{2\pi}} \quad \textit{a.s.}$$

This result may be called an almost sure local central limit theorem (ASLCLT), while (1.1) may be called an almost sure global central limit theorem.
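The asymptotics $$T_{k}\sim\sqrt{2k}/(\sigma\sqrt{\pi})$$ admit an exact deterministic check on a concrete walk (an illustration of ours, not from the paper): the aperiodic "lazy" walk with steps $$-1,0,1$$ taken with probabilities $$1/4,1/2,1/4$$ has $$\sigma^{2}=1/2$$, and writing each step as a sum of two independent $$\pm1/2$$ coin flips gives $$P(S_{k}=0)=\binom{2k}{k}4^{-k}$$ exactly:

```python
import math

def T(n):
    """T_n = sum_{k<=n} P(S_k = 0) for the lazy walk with steps
    -1, 0, 1 taken with probabilities 1/4, 1/2, 1/4 (sigma^2 = 1/2).

    P(S_k = 0) = C(2k, k)/4^k, computed by the stable recursion
    p_k = p_{k-1} * (2k - 1) / (2k), which avoids huge binomials.
    """
    total, p = 0.0, 1.0
    for k in range(1, n + 1):
        p *= (2 * k - 1) / (2 * k)
        total += p
    return total

n = 2000
sigma = math.sqrt(0.5)
asym = math.sqrt(2 * n) / (sigma * math.sqrt(math.pi))
ratio = T(n) / asym
print(ratio)  # approaches 1 from below as n grows
```

The ratio is slightly below 1 at finite n because $$\binom{2k}{k}4^{-k}<1/\sqrt{\pi k}$$ term by term.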

A more general version of this theorem was proved by Csáki et al.  under a finite third moment assumption; they showed that

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac{I\{a_{k}\leq S_{k}< b_{k}\}}{kP\{a_{k}\leq S_{k}<b_{k}\}}=1 \quad \mbox{a.s.}$$

if

$$\sum_{k=1}^{n}\frac{\log k}{k^{\frac{3}{2}}P\{a_{k}\leq S_{k}< b_{k}\}}=O(\log n) \quad \mbox{as }n\rightarrow\infty,$$

where $$\{a_{n}, n\geq1\}$$ and $$\{b_{n}, n\geq1\}$$ are both real sequences such that $$a_{n}\leq0\leq b_{n}$$ for $$n\geq1$$.

In this paper, inspired by the above results, especially Csáki et al.  and the references therein on the almost sure local central limit theorem, we establish, under some mild conditions, almost sure local central limit theorems for $$\|F_{n}\|$$ and $$\sup_{0\leq t\leq1}F_{n}(t)$$.

The rest of this paper is organized as follows. In Section 2, a generalization of the almost sure local central limit theorem of the uniform empirical process is formulated. In Section 3, the proofs of our main results are given. In Section 4, the paper is concluded and some statistical applications for future research are outlined.

## Main results

In this section, let $$\{u_{n}, n\geq1\}$$ and $$\{v_{n}, n\geq1\}$$ be real-valued sequences such that $$u_{n}>v_{n}$$ for all n, and assume that

$$\sum_{\substack{1\leq k< l\leq n\\ p_{k}p_{l}\neq0} }\frac{1}{l^{2}p_{k}p_{l}}=O(\log n),$$
(2.1)

where $$p_{k}=P\{v_{k}<\|F_{k}\|\leq u_{k}\}$$. Set

$$\alpha_{k}=\left \{ \begin{array}{l@{\quad}l} \frac{1}{p_{k}}I\{v_{k} < \|F_{k}\|\leq u_{k}\},&p_{k}\neq0, \\ 1,&p_{k}=0 \end{array} \right .$$

and

$$\beta_{k}=\left \{ \begin{array}{l@{\quad}l} \frac{1}{q_{k}}I\{v_{k} < \sup_{0\leq t\leq1}F_{k}(t)\leq u_{k}\},&q_{k}\neq0, \\ 1,&q_{k}=0, \end{array} \right .$$

where $$q_{k}=P\{v_{k}<\sup_{0\leq t\leq1}F_{k}(t)\leq u_{k}\}$$ is assumed to satisfy the analogue of condition (2.1) (with $$p_{k}$$ replaced by $$q_{k}$$).
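For intuition (an illustration of ours, not from the paper), both suprema entering $$\alpha_{k}$$ and $$\beta_{k}$$ can be computed exactly from the order statistics, since $$F_{k}$$ attains its extremes at the jump points of the empirical distribution function; the function name and sample size below are our own:

```python
import math
import random

def empirical_process_sups(xs):
    """Return (sup_t |F_k(t)|, sup_t F_k(t)) for a U[0,1] sample xs.

    Uses the order-statistic formulas for the one- and two-sided
    Kolmogorov-Smirnov statistics: only the sorted sample is needed.
    """
    k = len(xs)
    u = sorted(xs)
    d_plus = max(i / k - u[i - 1] for i in range(1, k + 1))
    d_minus = max(u[i - 1] - (i - 1) / k for i in range(1, k + 1))
    sqrt_k = math.sqrt(k)
    return sqrt_k * max(d_plus, d_minus), sqrt_k * d_plus

# With bands (v_k, u_k) given, the indicator inside alpha_k is then
# I{v_k < two_sided <= u_k}; p_k itself would be approximated by
# Monte Carlo in practice.
rng = random.Random(7)
xs = [rng.random() for _ in range(100)]
two_sided, one_sided = empirical_process_sups(xs)
print(two_sided, one_sided)
```

Note that $$\sup_{t}F_{k}(t)\leq\|F_{k}\|$$ always, which is why the one-sided value never exceeds the two-sided one.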

### Theorem 2.1

Let $$\{X_{n}; n\geq1\}$$ be a sequence of independent and identically distributed (i.i.d.) $$U[0, 1]$$ random variables, and assume condition (2.1) holds for $$\{p_{k}\}$$. Then

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n}\frac{\alpha_{k}}{k}=1 \quad \textit{a.s.}$$
(2.2)

### Theorem 2.2

Let $$\{X_{n}; n\geq1\}$$ be a sequence of independent and identically distributed (i.i.d.) $$U[0, 1]$$ random variables, and assume the analogue of condition (2.1) holds for $$\{q_{k}\}$$. Then

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n}\frac{\beta_{k}}{k}=1\quad \textit{a.s.}$$
(2.3)

### Remark 2.3

We believe that condition (2.1) can be weakened at the cost of more involved calculations; we leave this for future work.

## The proofs of the main results

In this section, we first state some auxiliary lemmas which will be used to prove our main results. The first lemma comes from Gonchigdanzan .

### Lemma 3.1

Assume that $$\xi_{1}, \xi_{2},\ldots$$ are random variables such that $$E\xi_{k}=1$$ for $$k=1,2,\ldots$$ . Then

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}E \Biggl(\sum _{k=1}^{n}\frac{\xi_{k}}{k} \Biggr)=1.$$

Furthermore, if $$\xi_{k}\geq0$$ for $$k\geq1$$ and

$$\operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\xi_{k}}{k} \Biggr)\ll(\log n)^{2-\varepsilon}$$

for some $$\varepsilon>0$$ and large enough n, then

$$\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac{\xi_{k}}{k}=1 \quad \textit{a.s.}$$

Here and in the sequel, $$a_{n}\ll b_{n}$$ denotes $$\limsup_{n\rightarrow\infty}|a_{n}/b_{n}|<\infty$$.
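As a degenerate sanity check of the first claim in Lemma 3.1 (our own illustration): taking $$\xi_{k}\equiv1$$ reduces it to the elementary fact that the harmonic sum grows like $$\log n$$:

```python
import math

def log_average(n):
    """(1/log n) * sum_{k<=n} 1/k, i.e. Lemma 3.1 with xi_k = 1."""
    return sum(1.0 / k for k in range(1, n + 1)) / math.log(n)

print(log_average(10**6))  # ~ 1.04, tending to 1 like gamma/log n
```

The error term of order $$\gamma/\log n$$ (Euler's constant over the logarithm) again shows how slow logarithmic averaging is.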

### Lemma 3.2

Let $$X_{1}, X_{2},\ldots$$ be i.i.d. random variables with common distribution function F, and denote $$D_{n}=\sup_{x}|\frac{1}{n}\sum_{i=1}^{n}I\{X_{i}\leq x \}-F(x)|$$. Then there exist positive constants $$c_{0}$$ and c such that, for all n, all F, and all positive r, one has

$$1-P\bigl(D_{n}< r/n^{1/2}\bigr)<c_{0}e^{-cr^{2}}.$$

### Proof

This lemma is due to Kiefer and Wolfowitz . □
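To see what Lemma 3.2 says in practice: in the real-valued case, the Dvoretzky-Kiefer-Wolfowitz inequality in Massart's form gives the explicit constants $$c_{0}=2$$, $$c=2$$, i.e. $$P(D_{n}>r/n^{1/2})\leq2e^{-2r^{2}}$$. The sketch below (our own illustration; the sample size, simulation budget, and threshold are arbitrary) compares this bound with a Monte Carlo estimate of the tail:

```python
import math
import random

def d_n(xs):
    """Kolmogorov-Smirnov statistic D_n for a U[0,1] sample."""
    n = len(xs)
    u = sorted(xs)
    d_plus = max(i / n - u[i - 1] for i in range(1, n + 1))
    d_minus = max(u[i - 1] - (i - 1) / n for i in range(1, n + 1))
    return max(d_plus, d_minus)

rng = random.Random(99)
n, m, r = 100, 2000, 1.0
hits = sum(
    d_n([rng.random() for _ in range(n)]) > r / math.sqrt(n)
    for _ in range(m)
)
freq = hits / m
bound = 2.0 * math.exp(-2.0 * r * r)  # Massart's DKW constant
print(freq, bound)
```

At this r the bound is nearly sharp (the Kolmogorov limiting tail is $$2\sum_{j\geq1}(-1)^{j-1}e^{-2j^{2}r^{2}}$$, whose first term equals the bound), so the simulated frequency lands close to it.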

Now, we give the proofs of our main results.

### Proof of Theorem 2.1

Denote

\begin{aligned}& F_{k,l}(t)=l^{-\frac{1}{2}}\sum_{i=k+1}^{l}(I_{\{X_{i}\leq t\}}-t), \quad 0\leq t\leq1, \\& \|F_{k,l}\|=\sup_{0\leq t\leq1}\bigl\vert F_{k,l}(t)\bigr\vert . \end{aligned}

First, by Lemma 3.1, it suffices to verify that

$$\operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\alpha_{k}}{k} \Biggr)\ll(\log n)^{2-\varepsilon}.$$

Note that

$$\operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\alpha_{k}}{k} \Biggr)=\sum_{k=1}^{n} \frac{\operatorname{Var}(\alpha_{k})}{k^{2}}+2\sum_{1\leq k< l\leq n }\frac{\operatorname{Cov}(\alpha_{k},\alpha_{l})}{kl}.$$

Furthermore, observe that if $$p_{k}=0$$, then

$$\operatorname{Var}(\alpha_{k})=0;$$

if $$p_{k}\neq0$$, then

$$\operatorname{Var}(\alpha_{k})=\frac{1-p_{k}}{p_{k}}\leq \frac{1}{p_{k}},$$

and consequently

$$\sum_{k=1}^{n}\frac{\operatorname{Var}(\alpha_{k})}{k^{2}}\leq\sum _{k=1}^{n}\frac {1}{k^{2}p_{k}}\leq\sum _{1\leq k< l\leq n }\frac{1}{l^{2}p_{k}p_{l}}\ll\log n;$$

if $$p_{k}p_{l}=0$$, then

$$\operatorname{Cov}(\alpha_{k},\alpha_{l})=0;$$

if $$p_{k}p_{l}\neq0$$, we have

\begin{aligned}& \operatorname{Cov}\bigl(I\bigl\{ v_{k}< \Vert F_{k} \Vert \leq u_{k}\bigr\} ,I\bigl\{ v_{l} <\Vert F_{l}\Vert \leq u_{l}\bigr\} \bigr) \\& \quad \leq\bigl\vert \operatorname{Cov}\bigl(I\bigl\{ \Vert F_{k} \Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\& \qquad {}+\bigl\vert \operatorname{Cov}\bigl(I \bigl\{ \Vert F_{k}\Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq v_{l}\bigr\} \bigr)\bigr\vert \\& \qquad {}+\bigl\vert \operatorname{Cov}\bigl(I\bigl\{ \Vert F_{k} \Vert \leq v_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\& \qquad {}+\bigl\vert \operatorname{Cov}\bigl(I \bigl\{ \Vert F_{k}\Vert \leq v_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq v_{l}\bigr\} \bigr)\bigr\vert \\& \quad =:A_{1}+A_{2}+A_{3}+A_{4}. \end{aligned}

For $$A_{1}$$, with $$k< l$$, it follows that

\begin{aligned} A_{1} =&\bigl\vert \operatorname{Cov}\bigl(I\bigl\{ \Vert F_{k}\Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\ \leq&\bigl\vert \operatorname{Cov}\bigl(I\bigl\{ \Vert F_{k} \Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{l}\Vert \leq u_{l}\bigr\} -I\bigl\{ \Vert F_{k,l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\ &{}+\bigl\vert \operatorname{Cov}\bigl(I \bigl\{ \Vert F_{k}\Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{k,l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\ \ll& E\bigl\vert I\bigl\{ \Vert F_{l}\Vert \leq u_{l} \bigr\} -I\bigl\{ \Vert F_{k,l}\Vert \leq u_{l}\bigr\} \bigr\vert \\ &{}+\bigl\vert \operatorname{Cov}\bigl(I\bigl\{ \Vert F_{k} \Vert \leq u_{k}\bigr\} ,I\bigl\{ \Vert F_{k,l}\Vert \leq u_{l}\bigr\} \bigr)\bigr\vert \\ =:&A_{11}+A_{12}. \end{aligned}

Making use of Lemma 3.2 and Fubini’s theorem, we have

\begin{aligned} A_{11} \ll& E\bigl\vert \Vert F_{l}\Vert -\Vert F_{k,l}\Vert \bigr\vert \\ \ll& E\Bigl\vert \sup_{0\leq t\leq1}\bigl\vert F_{l}(t) \bigr\vert -\sup_{0\leq t\leq1}\bigl\vert F_{k,l}(t)\bigr\vert \Bigr\vert \\ \ll& E\sup_{0\leq t\leq1}\bigl\vert \bigl\vert F_{l}(t) \bigr\vert -\bigl\vert F_{k,l}(t)\bigr\vert \bigr\vert \\ \ll& E\sup_{0\leq t\leq1}\bigl\vert F_{l}(t)-F_{k,l}(t) \bigr\vert \\ =&E\sup_{0\leq t\leq1}l^{-\frac{1}{2}}\Biggl\vert \sum _{i=1}^{k}(I_{\{X_{i}\leq t\}}-t)\Biggr\vert \\ =&\biggl(\frac{k}{l}\biggr)^{\frac{1}{2}}E\sup_{0\leq t\leq1}k^{-\frac{1}{2}} \Biggl\vert \sum_{i=1}^{k}(I_{\{X_{i}\leq t\}}-t) \Biggr\vert \\ =&\biggl(\frac{k}{l}\biggr)^{\frac{1}{2}}\int_{0}^{\infty}P \Biggl\{ \sup_{0\leq t\leq1}k^{-\frac{1}{2}}\Biggl\vert \sum _{i=1}^{k}(I_{\{X_{i}\leq t\}}-t)\Biggr\vert >x\Biggr\} \,dx \\ \ll&\biggl(\frac{k}{l}\biggr)^{\frac{1}{2}}\int_{0}^{\infty}e^{-cx^{2}} \,dx \\ \ll&\biggl(\frac{k}{l}\biggr)^{\frac{1}{2}}. \end{aligned}

By the independence of $$\{X_{n};n\geq1\}$$ (note that $$F_{k}$$ and $$F_{k,l}$$ are built from disjoint blocks of the $$X_{i}$$), we have $$A_{12}=0$$. Then

$$A_{1}\ll\biggl(\frac{k}{l}\biggr)^{\frac{1}{2}}.$$

Furthermore, by similar reasoning, we have

$$A_{i}\ll\frac{k}{l},\quad i=2,3,4,$$

and therefore,

$$A_{2}+A_{3}+A_{4}\ll\frac{k}{l}.$$

Thus, according to our assumptions, we can derive

\begin{aligned}& \sum_{1\leq k< l\leq n }\frac{\operatorname{Cov}(I\{v_{k}< \Vert F_{k}\Vert \leq u_{k}\},I\{v_{l} <\Vert F_{l}\Vert \leq u_{l}\} )}{klp_{k}p_{l}} \\& \quad \leq\sum_{1\leq k<l\leq n }\frac{A_{1}+A_{2}+A_{3}+A_{4}}{klp_{k}p_{l}} \\& \quad \ll\sum_{1\leq k<l\leq n}\frac{1}{klp_{k}p_{l}}\biggl( \frac {k}{l}\biggr)^{\frac{1}{2}} \\& \quad =\sum_{1\leq k<l\leq n }\frac{1}{\sqrt{k}l^{3/2}p_{k}p_{l}} \\& \quad =O(\log n). \end{aligned}

Hence, $$\operatorname{Var} (\sum_{k=1}^{n}\frac{\alpha_{k}}{k} )\ll\log n$$, so the variance condition of Lemma 3.1 holds, and the proof of Theorem 2.1 is complete. □

### Proof of Theorem 2.2

The proof is similar to that of Theorem 2.1, so we omit it here. □

## Concluding remarks

In this paper, we have studied the limit theory of the uniform empirical process: a generalization of the almost sure local central limit theorem of the uniform empirical process has been established.

Some statistical applications related to our main results deserve further investigation. Using a new testing approach based on the ASCLT, Thangavelu  investigated hypothesis testing via estimation of quantiles of the distribution of the statistic of interest. Based on a theorem on the ASCLT for rank statistics, he also proposed a small-sample approximation for the two-sample nonparametric Behrens-Fisher problem. Such statistical applications of our results will be discussed in future work.

## References

1. Brosamler, GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 104, 561-574 (1988)
2. Schatte, P: On strong versions of the central limit theorem. Math. Nachr. 137, 249-256 (1988)
3. Fahrner, I, Stadtmüller, U: On almost sure max-limit theorems. Stat. Probab. Lett. 37, 229-236 (1998)
4. Cheng, SH, Peng, L, Qi, YC: Almost sure convergence in extreme value theory. Math. Nachr. 190, 43-50 (1998)
5. Csáki, E, Gonchigdanzan, K: Almost sure limit theorems for the maximum of stationary Gaussian sequences. Stat. Probab. Lett. 58, 195-203 (2002)
6. Chen, SQ, Lin, ZY: Almost sure max-limits for nonstationary Gaussian sequence. Stat. Probab. Lett. 76, 1175-1184 (2006)
7. Peligrad, M, Shao, QM: A note on the almost sure central limit theorem for weakly dependent random variables. Stat. Probab. Lett. 22, 131-136 (1995)
8. Dudziński, M: A note on the almost sure central limit theorem for some dependent random variables. Stat. Probab. Lett. 61, 31-40 (2003)
9. Peng, ZX, Wang, LL, Nadarajah, S: Almost sure central limit theorem for partial sums and maxima. Math. Nachr. 282, 632-636 (2009)
10. Dudziński, M: The almost sure central limit theorems in the joint version for the maxima and sums of certain stationary Gaussian sequences. Stat. Probab. Lett. 78, 347-357 (2008)
11. Hörmann, S: An extension of almost sure central limit theory. Stat. Probab. Lett. 76, 191-202 (2006)
12. Wu, QY: An improved result in almost sure central limit theory for products of partial sums with stable distribution. Chin. Ann. Math., Ser. B 33(6), 919-930 (2012)
13. Wu, QY: Almost sure limit theorems for stable distributions. Stat. Probab. Lett. 81, 662-672 (2011)
14. Wu, QY: Almost sure central limit theory for products of sums of partial sums. Stat. Probab. Lett. 81, 662-672 (2011)
15. Zang, QP: A general result in almost sure central limit theory for random fields. Statistics 48, 965-970 (2014)
16. Zhang, Y: A note on almost sure central limit theorem for uniform empirical processes. J. Jilin Univ. Sci. Ed. 49, 687-689 (2011)
17. Zang, QP: A general result of the almost sure central limit theorem of uniform empirical process. Statistics 49, 98-103 (2015)
18. Chung, KL, Erdős, P: Probability limit theorems assuming only the first moment. Mem. Am. Math. Soc. 6, 1-19 (1951)
19. Csáki, E, Földes, A, Révész, P: On almost sure local and global central limit theorems. Probab. Theory Relat. Fields 97, 321-337 (1993)
20. Gonchigdanzan, K: Almost sure central limit theorems. Doctoral dissertation, University of Cincinnati, USA (2001)
21. Kiefer, J, Wolfowitz, J: On the deviations of the empiric distribution function of vector chance variables. Trans. Am. Math. Soc. 87, 173-186 (1958)
22. Thangavelu, K: Quantile estimation based on the almost sure central limit theorem. Dissertation, University of Göttingen, Germany (2005)

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11326174 and 11401245), the Natural Science Foundation of Jiangsu Province (Grant No. BK20130412), and the Natural Science Research Project of Ordinary Universities in Jiangsu Province (Grant No. 12KJB110003).

## Author information


### Corresponding author

Correspondence to Chenglian Zhu. 