
Limit theorems for ratios of order statistics from uniform distributions

Abstract

Let \(\{X_{ni}, 1 \leq i \leq m_{n}, n\geq 1\}\) be an array of independent random variables with uniform distribution on \([0, \theta _{n}]\), and let \(X_{n(k)}\), \(k=1, 2, \ldots , m_{n}\), denote the kth order statistic of the random variables \(\{X_{ni}, 1 \leq i \leq m_{n}\}\). We study the limit properties of the ratios \(\{R_{nij}=X_{n(j)}/X_{n(i)}, 1\leq i < j \leq m_{n}\}\) for fixed sample size \(m_{n}=m\), based on their moment properties. For \(1=i < j \leq m\), we establish a weighted law of large numbers, complete convergence, and a large deviation principle, and for \(2=i < j \leq m\), we obtain some classical limit theorems and self-normalized limit theorems.

1 Introduction

Let \(\{X_{ni}, 1 \leq i \leq m_{n}, n\geq 1\}\) be an array of independent random variables with uniform distribution on \([0, \theta _{n}]\), and, for given n, let \(X_{n(k)}\), \(k=1, 2, \ldots , m_{n}\), denote the kth order statistic of the random variables \(\{X_{ni}, 1 \leq i \leq m_{n}\}\). In contrast to the commonly studied functionals of order statistics, such as the extreme values, the median, the sample range, and linear functions of order statistics, several authors have investigated the limit behavior of the ratios of these order statistics

$$ R_{nij}=\frac{X_{n(j)}}{X_{n(i)}},\quad 1\leq i< j\leq m_{n}. $$

To derive the density function of \(R_{nij}\), we recall that the joint density function of \(X_{n(i)}\) and \(X_{n(j)}\) is

$$ f(x_{i},x_{j})=\frac{m_{n}! x_{i}^{i-1}(x_{j}-x_{i})^{j-i-1}(\theta _{n}-x_{j})^{m_{n}-j}}{(i-1)!(j-i-1)!(m_{n}-j)!\theta _{n}^{m_{n}}}I(0 \leq x_{i} < x_{j} \leq \theta _{n}). $$

Let \(\omega =x_{i}\), \(r=x_{j}/x_{i}\). Then the Jacobian of the transformation is ω, so that the joint density function of \(X_{n(i)}\) and \(R_{nij}\) is

$$ f(r,\omega )=\frac{m_{n}! \omega ^{j-1}(r-1)^{j-i-1}(\theta _{n}-r \omega )^{m_{n}-j}}{(i-1)!(j-i-1)!(m_{n}-j)!\theta _{n}^{m_{n}}}I( \omega \geq 0, r >1, r\omega \leq \theta _{n}). $$

Therefore, the density function of the ratio \(R_{nij}\) is

$$\begin{aligned} f_{R_{nij}}(r) =&\frac{m_{n}!(r-1)^{j-i-1}}{(i-1)!(j-i-1)!(m_{n}-j)! \theta _{n}^{m_{n}}} \int _{0}^{\theta _{n}/r}\omega ^{j-1}(\theta _{n}-r \omega )^{m_{n}-j}\,d\omega \\ =&\frac{m_{n}!}{(i-1)!(j-i-1)!(m_{n}-j)!\theta _{n}^{m_{n}}}\frac{(r-1)^{j-i-1}}{r ^{j}} \int _{0}^{\theta _{n}}(\theta _{n}-u)^{j-1}u^{m_{n}-j}\,du \\ =&\frac{m_{n}!}{(i-1)!(j-i-1)!(m_{n}-j)!}\sum_{k=0}^{j-1} \frac{(-1)^{k} C_{j-1}^{k}}{m_{n}+k-j+1} \frac{(r-1)^{j-i-1}}{r^{j}}I(r >1), \end{aligned}$$
(1.1)

which does not depend on \(\theta _{n}\). Thus, for fixed i, j and fixed sample size \(m_{n}=m\), \(\{R_{nij}, n\geq 1\}\) is a sequence of independent identically distributed (i.i.d.) random variables, which allows us to obtain sharp limit theorems for \(\{R_{nij}, n\geq 1\}\), even though the rows of the array \(\{X_{ni}, 1 \leq i \leq m_{n}, n\geq 1\}\) have different uniform distributions.
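
Before proceeding, the reader may wish to verify this invariance numerically. The following Python sketch (ours, purely illustrative; the choices \(m=5\), \(i=2\), \(j=4\) and the sample sizes are arbitrary) simulates the ratio for two very different values of \(\theta _{n}\) and compares both with the density (1.1).

```python
import numpy as np
from math import comb, factorial

def ratio_sample(n_rep, m, i, j, theta, rng):
    """Draw n_rep copies of R_nij = X_(j)/X_(i) from U(0, theta) samples of size m."""
    x = np.sort(rng.uniform(0.0, theta, size=(n_rep, m)), axis=1)
    return x[:, j - 1] / x[:, i - 1]          # order statistics are 1-indexed

def density(r, m, i, j):
    """The density (1.1) of R_nij; note that theta does not appear."""
    c = factorial(m) / (factorial(i - 1) * factorial(j - i - 1) * factorial(m - j))
    s = sum((-1) ** k * comb(j - 1, k) / (m + k - j + 1) for k in range(j))
    return c * s * (r - 1.0) ** (j - i - 1) / r ** j

rng = np.random.default_rng(0)
m, i, j = 5, 2, 4
for theta in (1.0, 1e6):                      # wildly different scale parameters
    r = ratio_sample(200_000, m, i, j, theta, rng)
    print(theta, np.mean(r <= 2.0))           # empirical P(R <= 2): should agree

grid = np.linspace(1.0 + 1e-9, 2.0, 200_001)  # P(R <= 2) from (1.1) by Riemann sum
print("theory", np.sum(density(grid, m, i, j)) * (grid[1] - grid[0]))
```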

In this paper, building on the work of Adler [1], Miao et al. [7], and Xu and Miao [11], we establish generalized results for \(\{R_{nij}, n\geq 1\}\) with fixed sample size \(m_{n}=m\). In Sect. 2 we first derive the moments of \(R_{nij}\); the limit theorems for \(R_{n1j}\) and \(R_{n2j}\) based on them are then established in Sects. 3 and 4, respectively.

Throughout this paper, let \(\log x =\ln \max \{e,x\}\), where “ln” is the natural logarithm. Denote by C a generic positive real number, which is not necessarily the same value in each appearance, and by \(I(B)\) the indicator function of set B.

2 Moments of \(R_{nij}\)

It is known from (1.1) that the density function of \(R_{n1j}\) is

$$ f_{R_{n1j}}(r) = \gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j}}I(r >1), $$
(2.1)

where

$$ \gamma _{m,1,j}=\frac{m!}{(j-2)!(m-j)!}\sum_{k=0}^{j-1} \frac{(-1)^{k} C _{j-1}^{k}}{m+k-j+1}, $$

and the density function of \(R_{n2j}\) is

$$ f_{R_{n2j}}(r) = \gamma _{m,2,j} \frac{(r-1)^{j-3}}{r^{j}}I(r >1), $$
(2.2)

where

$$ \gamma _{m,2,j}=\frac{m!}{(j-3)!(m-j)!}\sum_{k=0}^{j-1} \frac{(-1)^{k} C _{j-1}^{k}}{m+k-j+1}. $$
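It may help to record two special cases that recur below and, as it happens, do not depend on m at all. A direct computation from the definitions gives

$$ \gamma _{m,1,2}=m(m-1) \biggl(\frac{1}{m-1}-\frac{1}{m} \biggr)=1, \qquad \gamma _{m,2,3}=m(m-1)(m-2) \biggl(\frac{1}{m-2}-\frac{2}{m-1}+\frac{1}{m} \biggr)=2, $$

so \(R_{n12}\) has the Pareto density \(1/r^{2}\) and \(R_{n23}\) has the density \(2/r^{3}\) on \((1,\infty )\), whatever the sample size m.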

Theorem 2.1

  1. (I)

    For \(1< j\leq m\), we have

    $$ \textstyle\begin{cases} \mathbb{E} R_{n1j}^{\beta }< \infty , & 0 < \beta < 1, \\ \mathbb{E} R_{n1j}^{\beta }= \infty , & \beta \geq 1. \end{cases} $$
  2. (II)

    For \(2< j\leq m\), we have

    $$ \textstyle\begin{cases} \mathbb{E} R_{n2j}^{\beta } < \infty , & 0 < \beta < 2, \\ \mathbb{E} R_{n2j}^{\beta } = \infty , & \beta \geq 2. \end{cases} $$
  3. (III)

Let \(L(x)=\mathbb{E} R_{n2j}^{2} I(|R_{n2j}|\leq x)\) and \(\widetilde{L}(x)=\mathbb{E}(R_{n2j}-\mathbb{E} R_{n2j})^{2} I(|R_{n2j}- \mathbb{E} R_{n2j}|\leq x)\). Then both \(L(x)\) and \(\widetilde{L}(x)\) are slowly varying functions.

Proof

(I) It follows easily from (2.1) that \(f_{R_{n1j}}(r)\leq \gamma _{m,1,j}/r^{2}\) and \(f_{R_{n1j}}(r)\sim \gamma _{m,1,j}/r^{2}\) as \(r\to \infty \). Therefore, \(\mathbb{E} R_{n1j}^{\beta }< \infty \) for \(0 < \beta <1\) and \(\mathbb{E} R_{n1j}^{\beta }= \infty \) for \(\beta \geq 1\).

(II) Similarly, since \(f_{R_{n2j}}(r) \leq \gamma _{m,2,j}/r^{3}\) and \(f_{R_{n2j}}(r) \sim \gamma _{m,2,j}/r^{3}\) as \(r\to \infty \), we have \(\mathbb{E} R_{n2j}^{\beta } < \infty \) for \(0 < \beta < 2\) and \(\mathbb{E} R_{n2j}^{\beta } = \infty \) for \(\beta \geq 2\).

(III) For any \(\lambda >0\),

$$\begin{aligned} L(\lambda x) = & \int _{1}^{\lambda x} r^{2} f_{R_{n2j}}(r)\,dr \\ \sim & \int _{1}^{\lambda x} r^{2} \frac{\gamma _{m,2,j}}{r^{3}}\,dr \quad \text{as } x \to \infty . \end{aligned}$$

It follows that \(L(\lambda x) \sim \gamma _{m,2,j} \log x \sim L(x)\) as \(x\to \infty \), so \(L(x)\) varies slowly as \(x\to \infty \); the same holds for \(\widetilde{L}(x)\), since \(\widetilde{L}( \lambda x) \sim \gamma _{m,2,j}\log x \sim \widetilde{L}(x)\). □
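
To make part (I) concrete, take \(j=2\), for which \(f_{R_{n12}}(r)=1/r^{2}\) on \((1,\infty )\) by the computation following (2.2): then, for \(0<\beta <1\),

$$ \mathbb{E} R_{n12}^{\beta }= \int _{1}^{\infty } r^{\beta -2}\,dr=\frac{1}{1-\beta }, $$

whereas the integral diverges logarithmically at \(\beta =1\), in accordance with the dichotomy in part (I).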

Remark 2.1

Similar derivations show that the second moment of \(R_{nij}\) is finite for all \(3\leq i < j \leq m\), so many classical limit properties of \(\{R_{nij}, n\geq 1\}\) follow easily in that range. Hence, in studying the limit behavior of \(\{R_{nij}, n\geq 1\}\), we focus on \(\{R_{n1j}, n \geq 1\}\) and \(\{R_{n2j}, n \geq 1\}\).

3 Limit properties of \(R_{n1j}\)

According to part (I) of Theorem 2.1, \(\mathbb{E} R_{n1j} = \infty \). It follows that the classical strong law of large numbers for \(\{R_{n1j}, n \geq 1\}\) fails. Hence, we give the following weighted strong law of large numbers.

Theorem 3.1

Let \(\{a_{n}, n \geq 1\}\) and \(\{b_{n}, n \geq 1\}\) be two positive sequences satisfying the following conditions:

  1. (i)

    \(b_{n}\) is nondecreasing, \(b_{n}\to \infty \) as \(n \to \infty \), and \(\sum_{n=1}^{\infty } \frac{a_{n}}{b_{n}} < \infty \);

  2. (ii)

    \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \log (\frac{b_{n}}{a_{n}} ) \to \lambda < \infty \) as \(N \to \infty \).

Then we have

$$ \lim_{N\to \infty }\frac{1}{b_{N}}\sum _{n=1}^{N}a_{n}R_{n1j}= \gamma _{m,1,j} \lambda \quad \textit{almost surely}, $$

where \(\gamma _{m,1,j}\) is the same constant as in (2.1).

Proof

Let \(c_{n}=b_{n}/a_{n}\). Then \(c_{n} \to \infty \) follows from condition (i). Without loss of generality, we assume that \(c_{n} \ge 1\) for all \(n\ge 1\). Notice the following partition:

$$ \frac{1}{b_{N}}\sum_{n=1}^{N}a_{n}R_{n1j}= I_{N}+\mathit{II}_{N}+ \mathit{III}_{N}, $$
(3.1)

where

$$\begin{aligned} & I_{N} = \frac{1}{b_{N}}\sum_{n=1}^{N}a_{n} \bigl[R_{n1j} I(1 \leq R_{n1j} \leq c_{n} )- \mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ) \bigr], \\ &\mathit{II}_{N} = \frac{1}{b_{N}}\sum _{n=1}^{N}a_{n}R_{n1j} I(R_{n1j} > c _{n} ), \\ &\mathit{III}_{N} = \frac{1}{b_{N}}\sum _{n=1}^{N}a_{n}\mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ). \end{aligned}$$

Firstly, it can be established by condition (i) that

$$\begin{aligned} &\sum_{n=1}^{\infty }\frac{1}{c_{n}^{2}} \mathbb{E} R_{n1j}^{2} I(1 \leq R_{n1j} \leq c_{n} ) \\ &\quad = \sum_{n=1}^{\infty } \frac{\gamma _{m,1,j}}{c_{n}^{2}} \int _{1} ^{c_{n}}\frac{(r-1)^{j-2}}{r^{j-2}} \,dr \\ &\quad \leq C \sum_{n=1}^{\infty } \frac{1}{c_{n}^{2}} \int _{1}^{c_{n}} 1 \,dr \leq C \sum _{n=1}^{\infty }\frac{1}{c_{n}} \\ &\quad = C \sum_{n=1}^{\infty } \frac{a_{n}}{b_{n}} < \infty , \end{aligned}$$
(3.2)

and for any \(0<\varepsilon < 1\),

$$\begin{aligned} &\sum_{n=1}^{\infty }P\bigl(R_{n1j}I(R_{n1j} > c_{n} ) > \varepsilon \bigr) \\ &\quad = \sum_{n=1}^{\infty }P(R_{n1j} > c_{n}) \\ &\quad = \sum_{n=1}^{\infty } \gamma _{m,1,j} \int _{c_{n}}^{\infty }\frac{(r-1)^{j-2}}{r ^{j}} \,dr \\ &\quad \leq C \sum_{n=1}^{\infty } \int _{c_{n}}^{\infty } \frac{1}{r^{2}} \,dr \leq C \sum_{n=1}^{\infty }\frac{1}{c_{n}} = C \sum_{n=1}^{\infty } \frac{a _{n}}{b_{n}} < \infty . \end{aligned}$$
(3.3)

Consequently, by (3.2), the Khintchine–Kolmogorov convergence theorem (see Theorem 1 on page 113 of Chow and Teicher [2]), and Kronecker’s lemma, we conclude that \(I_{N} \to 0\) almost surely. Furthermore, by condition (i) and Kronecker’s lemma, we have

$$ \frac{1}{b_{N}}\sum_{n=1}^{N}a_{n} \to 0 \quad \text{as } N \to \infty . $$

Then it follows from (3.3) and the Borel–Cantelli lemma that \(\mathit{II}_{N} \to 0\) almost surely. Finally, since

$$\begin{aligned} &\mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ) \\ &\quad =\gamma _{m,1,j} \int _{1}^{c_{n}}\frac{(r-1)^{j-2}}{r^{j-1}} \,dr \\ &\quad = \gamma _{m,1,j} \sum_{l=0}^{j-2} C_{j-2}^{l}(-1)^{l} \int _{1}^{c _{n}} r^{-l-1} \,dr \\ &\quad \sim \gamma _{m,1,j} \log c_{n} \quad \text{as } n \to \infty , \end{aligned}$$

it follows from condition (ii) that \(\mathit{III}_{N} \to \gamma _{m,1,j} \lambda \) (note that \(\mathit{III}_{N}\) is nonrandom).

The proof is then completed. □

Remark 3.1

In particular, letting \(j=2\) in Theorem 3.1, we have

$$ \lim_{N\to \infty }\frac{1}{b_{N}}\sum _{n=1}^{N}a_{n}R_{n12}= \lambda \quad \text{almost surely}. $$

In this case, the result does not depend on the sample size \(m_{n}\), because the density of \(R_{n12}\) does not (indeed \(\gamma _{m,1,2}=1\) for every m); thus it makes no difference whether \(m_{n} \to \infty \) or \(m_{n}=m\) is fixed. Furthermore, if we take \(b_{n}=(\log n )^{\alpha +2}\) and \(a_{n}= (\log n)^{\alpha }/n\) with \(\alpha >-2\), then the conditions of Theorem 3.1 hold with \(\lambda = 1/(\alpha +2)\), showing that Theorem 3.1 in Adler [1] and Theorem 2.3 in Miao et al. [7] are special cases of Theorem 3.1.
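
For readers who want to see this special case in action, here is a small simulation (ours): with \(j=2\) and \(\alpha =0\), the theorem asserts \((\log N)^{-2}\sum_{n\leq N} R_{n12}/n \to 1/2\) almost surely, and since \(R_{n12}\) has density \(1/r^{2}\) it can be sampled by inverse CDF as \(1/(1-U)\) with U uniform on \((0,1)\). Convergence is only logarithmic, so the agreement at feasible N is rough.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
r = 1.0 / (1.0 - rng.uniform(size=N))  # R_{n12} ~ Pareto: P(R > t) = 1/t on (1, inf)
n = np.arange(1, N + 1)
partial = np.cumsum(r / n)             # sum_{n<=N} a_n R_{n12} with a_n = 1/n
for k in (10**3, 10**5, N):
    print(k, partial[k - 1] / np.log(k) ** 2)   # should drift toward 1/2
```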

Remark 3.2

Besides \(b_{n}=(\log n )^{\alpha +2}\), \(a_{n}= (\log n)^{\alpha }/n\) with \(\alpha >-2\), other sequences satisfy the conditions of Theorem 3.1, for example: (i) \(b_{n}=n^{\beta }\), \(a_{n}= 1\) for \(\beta >1\); (ii) \(b_{n}=n (\log n)^{\beta }\), \(a_{n}= 1\) for \(\beta >1\); and (iii) \(b_{n}=(\log n)^{2}(\log \log n)^{ \beta }\), \(a_{n}= (\log \log n)^{\beta }/n\) for all \(\beta \in \mathbb{R}\). As a result, Theorem 2.18 in Miao et al. [7] is also a special case of Theorem 3.1. The verification for the pair in (i) is sketched below.
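
To illustrate how such pairs are verified, consider (i): \(\sum_{n} a_{n}/b_{n}=\sum_{n} n^{-\beta }<\infty \) for \(\beta >1\), and

$$ \frac{1}{b_{N}}\sum_{n=1}^{N} a_{n}\log \biggl(\frac{b_{n}}{a_{n}} \biggr)=\frac{\beta }{N^{\beta }}\sum_{n=1}^{N}\log n \sim \frac{\beta N\log N}{N^{\beta }} \to 0 \quad \text{as } N \to \infty , $$

so Theorem 3.1 applies with \(\lambda =0\) and yields \(N^{-\beta }\sum_{n=1}^{N}R_{n1j}\to 0\) almost surely.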

To establish the complete convergence of the ratio \(R_{n1j}\), we first state the following lemma of Sung et al. [10].

Lemma 3.1

Let \(\{X_{ni}, 1 \leq i \leq k_{n}, n \geq 1\}\) be an array of row-wise independent random variables and \(\{\mu _{n}, n \geq 1\}\) be a sequence of positive constants with \(\sum_{n=1}^{\infty } \mu _{n} = \infty \). Suppose that, for every \(\varepsilon >0\) and some \(\delta >0\),

  1. (i)

    \(\sum_{n=1}^{\infty } \mu _{n} \sum_{i=1}^{k_{n}} P(|X_{ni}| > \varepsilon ) < \infty \),

  2. (ii)

    there exists \(J \geq 2\) such that

    $$ \sum_{n=1}^{\infty } \mu _{n} \Biggl(\sum_{i=1}^{k_{n}} \mathbb{E} X _{ni}^{2} I\bigl( \vert X_{ni} \vert \leq \delta \bigr) \Biggr)^{J} < \infty , $$
  3. (iii)

    \(\sum_{i=1}^{k_{n}} \mathbb{E} X_{ni} I(|X_{ni}| \leq \delta ) \to 0\) as \(n \to \infty \).

Then it is true that

$$ \sum_{n=1}^{\infty } \mu _{n} P \Biggl( \Biggl\vert \sum_{i=1}^{k_{n}} X_{ni} \Biggr\vert > \varepsilon \Biggr) < \infty \quad \textit{for all } \varepsilon >0. $$

Theorem 3.2

Let \(\{a_{n}, n \geq 1\}\), \(\{b_{n}, n \geq 1\}\), and \(\{\mu _{n}, n \geq 1\}\) be three positive sequences such that

  1. (i)

    \(b_{n}\) is nondecreasing, \(b_{n}\to \infty \) as \(n \to \infty \), and \(\sum_{n=1}^{\infty } \frac{a_{n}}{b_{n}} < \infty \);

  2. (ii)

    \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \log (\frac{b_{n}}{a_{n}} ) \to \lambda < \infty \) as \(N \to \infty \);

  3. (iii)

    \(\sum_{n=1}^{\infty } \mu _{n} = \infty \) and \(\sum_{N=1}^{\infty } \mu _{N}\sum_{n=1}^{N} \frac{a_{n}}{b_{n}} < \infty \).

Then, for every \(\varepsilon >0\), we have

$$ \sum_{N=1}^{\infty } \mu _{N} P \Biggl( \Biggl\vert \sum_{n=1}^{N} \frac{a _{n}}{b_{N}}R_{n1j}- \gamma _{m,1,j} \lambda \Biggr\vert \geq \varepsilon \Biggr) < \infty , $$

where \(\gamma _{m,1,j}\) is the same constant as in (2.1).

Proof

Let \(c_{n}=b_{n}/a_{n}\). Then \(c_{n} \to \infty \) according to condition (i). Without loss of generality, we assume that \(c_{n} \ge 1\) for all \(n\ge 1\). Notice the following partition:

$$ \sum_{n=1}^{N} \frac{a_{n}}{b_{N}}R_{n1j}= I_{N}+\mathit{II}_{N}+ \mathit{III}_{N}, $$
(3.4)

where

$$\begin{aligned} &I_{N} = \sum_{n=1}^{N} \frac{a_{n}}{b_{N}}\bigl[R_{n1j} I(1 \leq R_{n1j} \leq c_{n} )- \mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ) \bigr], \\ &\mathit{II}_{N} = \sum_{n=1}^{N} \frac{a_{n}}{b_{N}}R_{n1j} I(R_{n1j} > c_{n} ), \\ &\mathit{III}_{N} = \sum_{n=1}^{N} \frac{a_{n}}{b_{N}}\mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ). \end{aligned}$$

According to condition (i), Markov’s inequality, and the following inequality

$$ \mathbb{E} R_{n1j}^{2} I(1 \leq R_{n1j} \leq c_{n} )=\gamma _{m,1,j} \int _{1}^{c_{n}}\frac{(r-1)^{j-2}}{r^{j-2}} \,dr \leq \gamma _{m,1,j}c _{n}, $$

we have, for any \(\varepsilon >0\), that

$$ P\bigl( \vert I_{N} \vert \geq \varepsilon \bigr) \leq \frac{1}{\varepsilon ^{2}} \sum_{n=1} ^{N}\frac{a_{n}^{2}}{b_{N}^{2}}\mathbb{E} R_{n1j}^{2} I(1 \leq R_{n1j} \leq c_{n} ) \leq C\sum _{n=1}^{N} \frac{a_{n}}{b_{n}}. $$
(3.5)

Moreover, with condition (ii) and the following estimation

$$\begin{aligned} &\mathbb{E} R_{n1j} I(1 \leq R_{n1j} \leq c_{n} ) \\ &\quad =\gamma _{m,1,j} \int _{1}^{c_{n}}\frac{(r-1)^{j-2}}{r^{j-1}} \,dr \\ &\quad = \gamma _{m,1,j}\sum_{l=0}^{j-2} C_{j-2}^{l} (-1)^{l} \int _{1}^{c _{n}} r^{-l-1} \,dr \\ &\quad \sim \gamma _{m,1,j} \log c_{n} \quad \text{as } n\to \infty , \end{aligned}$$

we obtain

$$ \mathit{III}_{N} \sim \gamma _{m,1,j}\sum _{n=1}^{N}\frac{a_{n} \log c_{n}}{b_{N}} \to \gamma _{m,1,j} \lambda \quad \text{as } N \to \infty . $$

It follows from (3.5) and condition (iii) that

$$ \sum_{N=1}^{\infty }\mu _{N} P \bigl( \vert I_{N} + \mathit{III}_{N} - \gamma _{m,1,j} \lambda \vert \geq \varepsilon \bigr) < \infty . $$
(3.6)

Next, denote \(X_{Nn}=(a_{n}/b_{N} )R_{n1j} I(R_{n1j} > c_{n} )\). For any \(\varepsilon > 0\), we have

$$\begin{aligned} \sum_{n=1}^{N}P(X_{Nn} > \varepsilon ) \leq & \sum_{n=1}^{N}P \biggl(R _{n1j} > \max \biggl\{ c_{n}, \frac{b_{N} \varepsilon }{a_{n}} \biggr\} \biggr) \leq C \sum_{n=1}^{N} \int _{\xi }^{\infty } \frac{(r-1)^{j-2}}{r^{j}}\,dr \\ \leq & C \sum_{n=1}^{N} \int _{\xi }^{\infty } \frac{1}{r^{2}}\,dr \leq C \sum_{n=1}^{N} \frac{1}{\xi } \leq C \sum_{n=1}^{N} \frac{a _{n}}{b_{n}} , \end{aligned}$$

where \(\xi =\max \{ c_{n}, b_{N} \varepsilon /a_{n} \} \). It follows from condition (iii) that

$$ \sum_{N=1}^{\infty } \mu _{N}\sum_{n=1}^{N}P(X_{Nn} > \varepsilon ) \leq C \sum_{N=1}^{\infty } \mu _{N}\sum_{n=1}^{N} \frac{a_{n}}{b_{n}} < \infty . $$
(3.7)

Next, for any \(\delta >0\), we bound the partial sums of truncated second moments:

$$ \sum_{n=1}^{N} \mathbb{E} X_{Nn}^{2} I(X_{Nn} \leq \delta ) \leq \sum _{n=1}^{N} \frac{a_{n}^{2}}{b_{N}^{2}} \int _{c_{n}}^{\frac{b_{N} \delta }{a_{n}}}\gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j-2}}\,dr\leq C \sum_{n=1}^{N} \frac{a_{n}}{b_{n}}, $$

which is bounded by condition (i). Combining this with condition (iii) yields, for every \(J \geq 2\),

$$ \sum_{N=1}^{\infty } \mu _{N} \Biggl(\sum_{n=1}^{N} \mathbb{E} X_{Nn} ^{2} I(X_{Nn} \leq \delta ) \Biggr)^{J} \leq C\sum_{N=1}^{\infty } \mu _{N} \Biggl(\sum_{n=1}^{N} \frac{a_{n}}{b_{n}} \Biggr)^{J} \leq C \sum _{N=1}^{\infty } \mu _{N}\sum _{n=1}^{N} \frac{a_{n}}{b_{n}}< \infty . $$
(3.8)

Noting that (1) \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \to 0\) as \(N \to \infty \) by condition (i) and Kronecker’s lemma; and (2) \(\log (b_{N} \delta /b_{n} )\) is bounded by \(\log C\) upon taking \(\delta =C b_{1}/b_{N}\), we obtain, for any \(\delta >0\), that

$$\begin{aligned} &\sum_{n=1}^{N} \mathbb{E} X_{Nn} I(X_{Nn} \leq \delta ) \\ &\quad = \sum_{n=1}^{N} \frac{a_{n}}{b_{N}} \int _{c_{n}}^{\frac{b_{N} \delta }{a_{n}}} \gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j-1}}\,dr \\ &\quad \leq \frac{C}{b_{N}}\sum_{n=1}^{N} a_{n} \log \biggl(\frac{b_{N} \delta }{b_{n}} \biggr)\to 0 \quad \text{as } N \to \infty . \end{aligned}$$
(3.9)

Thus, from Lemma 3.1 and (3.7)–(3.9), we have

$$ \sum_{N=1}^{\infty } \mu _{N} P\bigl( \vert \mathit{II}_{N} \vert \geq \varepsilon \bigr) < \infty . $$
(3.10)

The combination of (3.4), (3.6), and (3.10) yields the desired result. □

Remark 3.3

As pointed out in Remark 3.2, the conditions of Theorem 3.2 are also easily satisfied. For example, it can be checked that the following sequences all satisfy the conditions of Theorem 3.2: (1) \(b_{n}=n^{\beta }\), \(a_{n}= 1\), \(\mu _{n}=1/n\), \(\beta >1\); (2) \(b_{n}=n (\log n)^{\beta }\), \(a_{n}= 1\), \(\mu _{n}=1/[n(\log n)^{\delta }]\), \(\beta >1\), \(\delta >0\); (3) \(b_{n}=(\log n)^{2}(\log \log n)^{\beta }\), \(a_{n}= (\log \log n)^{\beta }/n\), \(\mu _{n}=1/[n(\log n)^{\delta }]\), \(\beta \in \mathbb{R}\), \(\delta >0\); and (4) \(b_{n}=(\log n )^{\alpha +2}\), \(a_{n}= (\log n)^{\alpha }/n\), \(\mu _{n}=1/[n(\log n)^{\delta }]\), \(\alpha >-2\), \(\delta >0\). With the sequences in (4), Theorem 3.2 covers Theorem 2.1 and Theorem 2.6 in Xu and Miao [11].

In what follows, we give the weighted weak law of large numbers.

Theorem 3.3

Let \(\{a_{n}, n \geq 1\}\) and \(\{b_{n}, n \geq 1\}\) be two positive sequences satisfying the following conditions:

  1. (i)

    \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \to 0\) as \(N \to \infty \);

  2. (ii)

    \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \log (\frac{b_{N}}{a_{n}} ) \to \tilde{\lambda } < \infty \) as \(N \to \infty \).

Then

$$ \frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} R_{n1j}\stackrel{P}{\longrightarrow } \gamma _{m,1,j} \tilde{\lambda } \quad \textit{as } N\to \infty , $$

where \(\gamma _{m,1,j}\) is the same constant as in (2.1).

Proof

We use the so-called weak law (see Theorem 1 on page 356 of Chow and Teicher [2]) for the proof of Theorem 3.3. From condition (i), we have the following two inequalities:

$$\begin{aligned} &\frac{1}{b_{N}^{2}}\sum_{n=1}^{N} a_{n}^{2} \mathbb{E} R_{n1j}^{2} I(1 \leq R_{n1j}\leq b_{N}/a_{n}) \\ &\quad = \frac{1}{b_{N}^{2}} \sum_{n=1}^{N} a_{n}^{2} \int _{1}^{\frac{b _{N}}{a_{n}}}\gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j-2}} \,dr \\ &\quad \leq \frac{C}{b_{N}^{2}}\sum_{n=1}^{N} a_{n}^{2} \int _{1}^{\frac{b _{N}}{a_{n}}} 1 \,dr \leq \frac{C}{b_{N}} \sum_{n=1}^{N} a_{n} \to 0 \end{aligned}$$

and, for any \(\varepsilon >0\),

$$\begin{aligned} &\sum_{n=1}^{N} P(a_{n}R_{n1j} > b_{N} \varepsilon ) \\ &\quad = \sum_{n=1}^{N} \int _{\frac{b_{N} \varepsilon }{a_{n}}}^{\infty } \gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j}} \,dr \\ &\quad \leq C \sum_{n=1}^{N} \int _{\frac{b_{N} \varepsilon }{a_{n}}}^{ \infty } \frac{1}{r^{2}} \,dr = \frac{C}{b_{N}}\sum_{n=1}^{N} a_{n} \to 0, \end{aligned}$$

which, by the cited weak law, implies that \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} R_{n1j}-\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n}\mathbb{E} R_{n1j} I(1\leq R_{n1j}\leq b_{N}/a_{n}) \stackrel{P}{\longrightarrow }0\) as \(N\to \infty \).

Next, using condition (i) again, we have

$$\begin{aligned} &\mathbb{E} R_{n1j} I(1\leq R_{n1j}\leq b_{N}/a_{n}) \\ &\quad = \int _{1}^{\frac{b_{N}}{a_{n}}}\gamma _{m,1,j} \frac{(r-1)^{j-2}}{r ^{j-1}} \,dr \\ &\quad = \gamma _{m,1,j}\sum_{l=0}^{j-2} C_{j-2}^{l}(-1)^{l} \int _{1}^{\frac{b _{N}}{a_{n}}} r^{-l-1} \,dr \\ &\quad = \gamma _{m,1,j} \Biggl[\log b_{N} - \log a_{n} + \sum_{l=1}^{j-2} C _{j-2}^{l}(-1)^{l+1} \biggl( \frac{a_{n}^{l}}{l b_{N}^{l}} - \frac{1}{l} \biggr) \Biggr] \\ &\quad \sim \gamma _{m,1,j}\log \biggl(\frac{b_{N}}{a_{n}} \biggr) \quad \text{as } N \to \infty . \end{aligned}$$

It follows from condition (ii) that

$$\begin{aligned} &\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \mathbb{E} R_{n1j} I(1\leq R _{n1j}\leq b_{N}/a_{n}) \\ &\quad \sim \frac{\gamma _{m,1,j}}{b_{N}} \sum_{n=1}^{N} a_{n} \log \biggl(\frac{b _{N}}{a_{n}} \biggr) \to \gamma _{m,1,j} \tilde{\lambda } \quad \text{as } N \to \infty . \end{aligned}$$
(3.11)

The proof is then completed. □

In particular, let \(L(x)\) be a slowly varying function. If we take \(a_{n}= n^{\alpha } L(n)\) and \(b_{n}=n^{\alpha +1}L(n)\log n\) with \(\alpha >-1/(j-1)\) and \(j\geq 2\), then the conditions of Theorem 3.3 hold with \(\tilde{\lambda }=1/(\alpha +1)\). We state this as the following corollary, whose proof shows how the conditions are verified.

Corollary 3.1

Let \(L(x)\) be a slowly varying function. Then, for any \(\alpha >-1/(j-1)\), we have

$$ \frac{1}{N^{\alpha +1} L(N) \log N }\sum_{n=1}^{N} n^{\alpha } L(n) R _{n1j}\stackrel{P}{\longrightarrow } \frac{\gamma _{m,1,j}}{\alpha +1} \quad \textit{as } N\to \infty , $$

where \(\gamma _{m,1,j}\) is the same constant as in (2.1).

Proof

Denote \(U(x)=x^{\gamma } L(x)\) and \(U_{p}(x)=\int _{0}^{x}t^{p} U(t)\,dt\). Since \(L(x)\) is slowly varying, \(U(x)\) varies regularly with exponent γ. Therefore, according to part (b) of Theorem 1 on page 281 of Feller [3], for \(p > -\gamma -1\), it is true that

$$ \int _{0}^{x}t^{p} t^{\gamma } L(t) \,dt \sim \frac{1}{p+ \gamma +1}x ^{p+1}x^{\gamma } L(x) \quad \text{as } x \to \infty , $$

which implies that

$$ \sum_{n=1}^{N} n^{\alpha } L(n) \sim \frac{1}{\alpha +1} N^{\alpha +1 } L(N) \quad \text{for } \alpha >-1, \text{ as } N \to \infty . $$
(3.12)

It is easy to check that \(L(x)\log x\), \(L(x)\log L(x)\), and \(L(x)^{l}\) for all \(l\in \mathbb{N}\) are slowly varying whenever \(L(x)\) is. Hence we obtain the following analogues of (3.12):

$$\begin{aligned} & \sum_{n=1}^{N} n^{\alpha } L(n)\log n \sim \frac{1}{\alpha +1} N ^{\alpha +1 } L(N)\log N \quad \text{for } \alpha >-1; \\ & \sum_{n=1}^{N} n^{\alpha } L(n)\log L(n) \sim \frac{1}{\alpha +1} N ^{\alpha +1 } L(N)\log L(N) \quad \text{for } \alpha >-1; \\ & \sum_{n=1}^{N} n^{(l+1)\alpha } L(n)^{l+1} \sim \frac{1}{(l+1) \alpha +1} N^{(l+1)\alpha +1} L(N)^{l+1} \quad \text{for } \alpha >-\frac{1}{l+1}, 1 \leq l \leq m-2. \end{aligned}$$
(3.13)

Let \(a_{n}= n^{\alpha } L(n)\), \(b_{n}=n^{\alpha +1}L(n)\log n\). We once again use the so-called weak law, which can be found on page 356 of Chow and Teicher [2]. By (3.12), it is easy to obtain that, for any \(\varepsilon >0\),

$$\begin{aligned} &\sum_{n=1}^{N} P(a_{n}R_{n1j} > b_{N} \varepsilon ) \\ &\quad = \sum_{n=1}^{N} \int _{\frac{b_{N} \varepsilon }{a_{n}}}^{\infty } \gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j}} \,dr \\ &\quad \leq C \sum_{n=1}^{N} \int _{\frac{b_{N} \varepsilon }{a_{n}}}^{ \infty } \frac{1}{r^{2}} \,dr = \frac{C}{b_{N}}\sum_{n=1}^{N} a_{n} = \frac{C \sum_{n=1}^{N} n^{\alpha } L(n)}{N^{\alpha +1}\log N L(N)} \\ &\quad \sim \frac{C N^{\alpha + 1} L(N)}{N^{\alpha +1}\log N L(N)}=\frac{C}{ \log N} \to 0 \quad \text{as } N\to \infty \end{aligned}$$

and

$$\begin{aligned} &\frac{1}{b_{N}^{2}}\sum_{n=1}^{N} a_{n}^{2} \mathbb{E} R_{n1j}^{2} I(1 \leq R_{n1j}\leq b_{N}/a_{n}) \\ &\quad = \frac{1}{b_{N}^{2}} \sum_{n=1}^{N} a_{n}^{2} \int _{1}^{\frac{b _{N}}{a_{n}}}\gamma _{m,1,j} \frac{(r-1)^{j-2}}{r^{j-2}} \,dr \\ &\quad \leq \frac{C}{b_{N}^{2}}\sum_{n=1}^{N} a_{n}^{2} \int _{1}^{\frac{b _{N}}{a_{n}}}1 \,dr \leq \frac{C}{b_{N}} \sum_{n=1}^{N} a_{n} \to 0 \quad \text{as } N\to \infty , \end{aligned}$$

which, by the weak law, imply that \(\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} R_{n1j}-\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n}\mathbb{E} R_{n1j} I(1\leq R_{n1j}\leq b_{N}/a_{n}) \stackrel{P}{\longrightarrow }0\) as \(N\to \infty \).

Next, notice that

$$\begin{aligned} &\mathbb{E} R_{n1j} I(1\leq R_{n1j}\leq b_{N}/a_{n}) \\ &\quad = \int _{1}^{\frac{b_{N}}{a_{n}}}\gamma _{m,1,j} \frac{(r-1)^{j-2}}{r ^{j-1}} \,dr \\ &\quad = \gamma _{m,1,j}\sum_{l=0}^{j-2} C_{j-2}^{l}(-1)^{l} \int _{1}^{\frac{b _{N}}{a_{n}}} r^{-l-1} \,dr \\ &\quad = \gamma _{m,1,j} \Biggl[\log b_{N} - \log a_{n} + \sum_{l=1}^{j-2} C _{j-2}^{l}(-1)^{l+1} \biggl( \frac{a_{n}^{l}}{l b_{N}^{l}} - \frac{1}{l} \biggr) \Biggr]. \end{aligned}$$

In addition, by (3.12) and (3.13), we have

$$\begin{aligned} &\frac{\log b_{N} \sum_{n=1}^{N} a_{n} }{b_{N}} \sim 1+\frac{\log L(N)+ \log \log N}{(\alpha +1)\log N}; \qquad \frac{\sum_{n=1}^{N} a_{n} \log a_{n}}{b_{N}} \sim \frac{\alpha \log N +\log L(N)}{(\alpha +1)\log N}; \\ &\frac{\sum_{n=1}^{N} a_{n}^{l+1}}{l b_{N}^{l+1}} \sim \frac{1}{l[(l+1) \alpha +1]N^{l} (\log N)^{l+1}}; \qquad \frac{ \sum_{n=1}^{N} a_{n}}{l b_{N}} \sim \frac{1}{l \log N}. \end{aligned}$$

Therefore

$$\begin{aligned} &\frac{1}{b_{N}}\sum_{n=1}^{N} a_{n} \mathbb{E} R_{n1j} I(1\leq R _{n1j}\leq b_{N}/a_{n}) \\ &\quad = \frac{\gamma _{m,1,j}}{b_{N}} \sum_{n=1}^{N} a_{n} \Biggl[\log b _{N} - \log a_{n} + \sum_{l=1}^{j-2} C_{j-2}^{l}(-1)^{l+1} \biggl(\frac{a _{n}^{l}}{l b_{N}^{l}} -\frac{1}{l} \biggr) \Biggr] \\ &\quad = \gamma _{m,1,j} \Biggl[\frac{ \log b_{N} \sum_{n=1}^{N} a_{n}}{b _{N}} - \frac{\sum_{n=1}^{N} a_{n} \log a_{n}}{b_{N}}\\ &\qquad {} + \sum_{l=1} ^{j-2} C_{j-2}^{l}(-1)^{l+1} \biggl( \frac{\sum_{n=1}^{N} a_{n}^{l+1}}{l b_{N}^{l+1}} -\frac{ \sum_{n=1}^{N} a_{n}}{l b_{N}} \biggr) \Biggr] \\ &\quad \sim \gamma _{m,1,j} \frac{\log N +\log \log N}{(\alpha +1) \log N} \to \frac{\gamma _{m,1,j}}{\alpha +1} \quad \text{as } N\to \infty , \end{aligned}$$

where \(\alpha >-1/(j-1)\). The proof is then completed. □

Remark 3.4

It is easy to check that the corresponding strong law of large numbers for Theorem 3.3 fails; hence the weak law of large numbers is optimal in this case. Corollary 3.1 extends Theorem 3.2 in Adler [1], which proved the same result for \(R_{n12}\) based on samples from the uniform distribution \(U(0, p)\).
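
A quick Monte Carlo illustration of Corollary 3.1 (ours; \(L\equiv 1\), \(\alpha =0\), \(j=2\), so \(\gamma _{m,1,2}=1\) and the limit is \(1/(\alpha +1)=1\)) also shows why only a weak law can hold: across seeds, most replicates land near 1, but occasional replicates are thrown far off by a single enormous ratio.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, N, reps = 0.0, 200_000, 20
n = np.arange(1, N + 1)
for _ in range(reps):
    r = 1.0 / (1.0 - rng.uniform(size=N))          # R_{n12}, density 1/r^2 on (1, inf)
    stat = np.sum(n ** alpha * r) / (N ** (alpha + 1) * np.log(N))
    print(round(stat, 3))                          # typically near 1, with rare outliers
```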

4 Limit properties of \(R_{n2j}\)

The following Marcinkiewicz–Zygmund type law of large numbers extends the result for \(R_{n23}\) in Xu and Miao [11, Theorem 2.3].

Theorem 4.1

For any \(\delta \in (0,2)\), we have

$$ \frac{1}{N^{1/\delta }}\sum_{n=1}^{N}(R_{n2j} - c) \to 0 \quad \textit{almost surely} $$

for some finite constant c, where c takes the value \(\mathbb{E} R_{n2j}\) for \(\delta \in [1,2)\) and c is arbitrary for \(\delta \in (0,1)\). In particular, the Kolmogorov type strong law of large numbers holds when \(\delta =1\).

Proof

By part (II) of Theorem 2.1 and the Marcinkiewicz–Zygmund theorem (see Theorem 2 on page 125 in Chow and Teicher [2]), the theorem follows. □
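
As an illustration (ours), take \(j=3\): by the computation following (2.2), \(R_{n23}\) has density \(2/r^{3}\), hence distribution function \(1-1/r^{2}\), mean \(\mathbb{E} R_{n23}=2\), and the inverse-CDF representation \(R_{n23}\stackrel{d}{=}(1-U)^{-1/2}\) with U uniform on \((0,1)\). The sketch below checks the case \(\delta =3/2\).

```python
import numpy as np

rng = np.random.default_rng(3)
delta, N = 1.5, 2_000_000
r = (1.0 - rng.uniform(size=N)) ** -0.5   # R_{n23}: F(r) = 1 - 1/r^2, so R = (1-U)^{-1/2}
s = np.cumsum(r - 2.0)                    # centered partial sums (E R_{n23} = 2)
for k in (10**3, 10**5, N):
    print(k, s[k - 1] / k ** (1.0 / delta))   # Marcinkiewicz-Zygmund: tends to 0
```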

The following central limit theorem and its almost sure version extend the corresponding results for \(R_{n23}\) established by Miao et al. [7, Theorem 2.12] and Xu and Miao [11, Theorem 2.4], respectively.

Theorem 4.2

Let \(L(x)=\mathbb{E} R^{2}_{n2j} I(|R_{n2j}| \leq x)\). Then

$$ \frac{1}{\eta _{N}} \sum_{n=1}^{N} (R_{n2j}-\mathbb{E} R_{n2j} ) \xrightarrow{d} \varPhi (x) \quad \textit{as } N \to \infty , $$

where \(\eta _{N} = 1 \lor \sup \{x >0; N L(x) \geq x^{2}\}\) and \(\varPhi (x)\) denotes the standard normal distribution function.

Proof

By part (III) of Theorem 2.1, \(L(x)=\mathbb{E} R^{2}_{n2j} I(|R_{n2j}| \leq x)\) varies slowly as \(x\to \infty \), so the distribution of \(R_{n2j}\) belongs to the domain of attraction of the normal law. The theorem then follows from Theorem 4.17 of Kallenberg [6, page 73]. □
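
For \(j=3\) the normalizer is explicit: there \(L(x)=\int _{1}^{x} r^{2}\cdot 2r^{-3}\,dr=2\log x\), so \(\eta _{N}\) solves \(x^{2}=2N\log x\), and hence

$$ \eta _{N}=\sqrt{2N\log \eta _{N}} \sim \sqrt{N\log N} \quad \text{as } N \to \infty , $$

the usual \(\sqrt{N\log N}\) scaling for distributions whose truncated second moment grows logarithmically.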

Theorem 4.3

Let \(L(x)=\mathbb{E} R^{2}_{n2j} I(|R_{n2j}| \leq x)\). Then, for any real number x, we have

$$ \lim_{N \to \infty }\frac{1}{\log N}\sum _{n=1}^{N} \frac{1}{n} I \biggl( \frac{S_{n}}{\eta _{n}} \leq x \biggr)= \varPhi (x) \quad \textit{almost surely}, $$

where \(S_{n}=\sum_{i=1}^{n} Y_{i} \), \(Y_{n}= R_{n2j}-\mathbb{E} R_{n2j}\), \(\eta _{n} = 1 \lor \sup \{x >0; n L(x) \geq x^{2}\}\), and \(\varPhi (x)\) denotes the standard normal distribution function.

The proof of Theorem 4.3 is very similar to that of Xu and Miao [11, Theorem 2.4], so we omit it. For the same reason, the following large deviation principle, which extends the corresponding result established by Xu and Miao [11, Theorem 2.5], is given without proof.

Theorem 4.4

For any \(x>1/2\),

$$ \lim_{N \to \infty }\frac{1}{\log N}\log P \Biggl(\sum _{n=1}^{N} R_{n2j} > N^{x} \Biggr)=-2x+1. $$
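
Although we omit the proof, the rate \(-2x+1\) admits a simple “one big jump” heuristic (an informal check, not a proof). Since \(f_{R_{n2j}}(r)\sim \gamma _{m,2,j}/r^{3}\), the tail satisfies \(P(R_{n2j}>t)\sim \gamma _{m,2,j}/(2t^{2})\), and for \(x>1/2\) the dominant way for the sum to exceed \(N^{x}\) is a single large summand:

$$ P \Biggl(\sum_{n=1}^{N} R_{n2j} > N^{x} \Biggr) \approx N P\bigl(R_{n2j} > N^{x}\bigr) \approx \frac{\gamma _{m,2,j}}{2} N^{1-2x}, $$

and \(\frac{1}{\log N}\log N^{1-2x}=-2x+1\).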

As noted above, the βth moment of \(R_{n2j}\) is infinite for every \(\beta \geq 2\). Thus some other classical limit properties, such as the law of the iterated logarithm, do not hold. As is well known, limit theorems for self-normalized sums usually require much less stringent moment conditions than the classical limit theorems. Therefore, we investigate the limit behavior of the self-normalized sum \(S_{N}/V_{N}\), where \(S_{N}=\sum_{n=1}^{N} (R_{n2j}-\mathbb{E} R_{n2j})\) and \(V_{N}^{2}=\sum_{n=1}^{N} (R_{n2j}-\mathbb{E} R_{n2j})^{2}\). Notice that \(\{R_{n2j}-\mathbb{E} R_{n2j}, n \geq 1 \}\) is a sequence of i.i.d. random variables with mean zero. In addition, the distribution of \(R_{n2j}-\mathbb{E} R_{n2j}\) is in the domain of attraction of the normal law, because \(\widetilde{L}(x)=\mathbb{E}(R_{n2j}-\mathbb{E} R_{n2j})^{2} I(|R_{n2j}-\mathbb{E} R_{n2j}|\leq x)\) is slowly varying as \(x\to \infty \). Thus many self-normalized limit properties can be established directly as corollaries of well-known results. We list some of them without proofs.

From Theorem 3.3 in Giné et al. [4], we obtain the following self-normalized central limit theorem.

Theorem 4.5

\(S_{N}/V_{N}\xrightarrow{d} \varPhi (x)\) as \(N \to \infty \).
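
A simulation sketch (ours, with \(j=3\) as before) illustrates Theorem 4.5: although the summands have infinite variance, the self-normalized statistic is approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(4)
N, reps = 5_000, 2_000
u = rng.uniform(size=(reps, N))
y = (1.0 - u) ** -0.5 - 2.0                        # centered R_{n23} (mean 2)
t = y.sum(axis=1) / np.sqrt((y ** 2).sum(axis=1))  # S_N / V_N per replicate
for q, z in [(0.05, -1.645), (0.50, 0.0), (0.95, 1.645)]:
    print(q, round(float(np.quantile(t, q)), 3), "vs", z)   # near N(0,1) quantiles
```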

The following self-normalized almost sure central limit theorem can be obtained from Corollary 1 in Zhang [12].

Theorem 4.6

Denote \(d_{n}=\exp \{(\log n)^{\alpha }\}/n\) and \(D_{N}=\sum_{n=1}^{N} d_{n}\) for \(0\leq \alpha < \frac{1}{2}\). Then, for any real x, we have

$$ \lim_{N \to \infty }\frac{1}{D_{N}}\sum _{n=1}^{N} d_{n} I \biggl( \frac{S _{n}}{V_{n}} \leq x \biggr)= \varPhi (x) \quad a.s. $$

From Theorem 3 in Robinson and Wang [8], we have the following self-normalized Berry–Esseen bounds.

Theorem 4.7

Let \(\eta _{N}=\sup \{x: Nx^{-2} \mathbb{E} R_{n2j}^{2} I(|R_{n2j}| \leq x) \geq 1\}\). Then there exists \(0<\eta <1\) such that

$$ \biggl\vert P \biggl(\frac{S_{N}}{V_{N}}\leq x \biggr) -\varPhi (x) \biggr\vert \leq A\delta _{N} \exp \biggl\{ -\frac{\eta x^{2}}{2} \biggr\} $$

for all \(x\in R\)and \(N\geq 1\), where

$$ \delta _{N}=N P\bigl( \vert R_{n2j} \vert > \eta _{N}\bigr)+N \eta _{N}^{-1} \bigl\vert \mathbb{E} R_{n2j} I\bigl( \vert R_{n2j} \vert \leq \eta _{N}\bigr) \bigr\vert + N \eta _{N}^{-3} \mathbb{E} \vert R_{n2j} \vert ^{3} I\bigl( \vert R _{n2j} \vert \leq \eta _{N}\bigr) $$

and A is an absolute constant.

From Theorem 3.1 in Shao [9], we obtain the following self-normalized moderate deviation principle.

Theorem 4.8

Let \(\{x_{N}, N\geq 1\}\) be a sequence of positive numbers satisfying

$$ x_{N} \to \infty \quad \textit{and} \quad \frac{x_{N}}{\sqrt{N}} \to 0 \quad \textit{as } N\to \infty . $$

Then

$$ \lim_{N\to \infty }\frac{1}{x_{N}^{2}}\log P \biggl( \frac{S_{N}}{V_{N}} \geq x_{N} \biggr)=-\frac{1}{2}. $$

The following self-normalized law of the iterated logarithm can be established by Theorem 3.1 in Griffin and Kuelbs [5].

Theorem 4.9

It is true that

$$ \limsup_{N\to \infty }\frac{S_{N}}{V_{N} \sqrt{2\log \log N}}=1 \quad \textit{almost surely}. $$

References

  1. Adler, A.: Laws of large numbers for ratios of uniform random variables. Open Math. 13(1), 571–576 (2015)


  2. Chow, Y.S., Teicher, H.: Probability Theory: Independence, Interchangeability, Martingales, 3rd edn. Springer, New York (1997)


  3. Feller, W.: An Introduction to Probability Theory and Its Applications, 3rd edn. Wiley, New York (1971)


  4. Giné, E., Götze, F., Mason, D.M.: When is the Student t-statistic asymptotically standard normal? Ann. Probab. 25(3), 1514–1531 (1997)


  5. Griffin, P.S., Kuelbs, J.D.: Self-normalized laws of the iterated logarithm. Ann. Probab. 17(4), 1571–1601 (1989)


  6. Kallenberg, O.: Foundations of Modern Probability. Springer, New York (1997)


  7. Miao, Y., Sun, Y., Wang, R., Dong, M.: Various limit theorems for ratios from the uniform distribution. Open Math. 14(1), 393–403 (2016)


  8. Robinson, J., Wang, Q.: On the self-normalized Cramér-type large deviation. J. Theor. Probab. 18(4), 891–909 (2005)


  9. Shao, Q.M.: Self-normalized large deviations. Ann. Probab. 25(1), 285–328 (1997)


  10. Sung, S.H., Volodin, A.I., Hu, T.C.: More on complete convergence for arrays. Stat. Probab. Lett. 71(4), 303–311 (2005)


  11. Xu, S.F., Miao, Y.: Some limit theorems for ratios of order statistics from uniform random variables. J. Inequal. Appl. 2017, 295, 1–18 (2017)


  12. Zhang, Y.: A general result on almost sure central limit theorem for self-normalized sums for mixing sequences. Lith. Math. J. 53(4), 471–483 (2013)



Acknowledgements

The authors greatly appreciate both the editor and the referees for their valuable comments and some helpful suggestions that improved the clarity and readability of this paper.

Funding

This work was supported by the National Natural Science Foundation of China (11871056, 11471104).

Author information

Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shoufang Xu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Xu, S., Mei, C. & Miao, Y. Limit theorems for ratios of order statistics from uniform distributions. J Inequal Appl 2019, 303 (2019). https://doi.org/10.1186/s13660-019-2256-7

