
Limit properties of exceedance point processes of strongly dependent normal sequences

Journal of Inequalities and Applications20152015:63

https://doi.org/10.1186/s13660-015-0585-8

Received: 28 September 2014

Accepted: 2 February 2015

Published: 20 February 2015

Abstract

In this paper, we define an in-plane Cox process and prove that, under some mild conditions, the time-normalized point process of exceedances by a strongly dependent normal sequence converges in distribution to this Cox process. As applications of the convergence result, two important joint asymptotic distributions for order statistics are derived.

Keywords

Cox process; exceedance point process; strongly dependent normal sequences; kth maxima

MSC

60F05; 62E20

1 Introduction

Let \(\{\xi_{i}, i\geq1\}\) be a standardized normal sequence with correlation coefficients \(r_{ij}=\operatorname{Cov}(\xi_{i},\xi_{j})\), and let \(M_{n}^{(k)}\) be the kth largest maximum of \(\{\xi_{i},1\leq i\leq n\}\). A conventional assumption is that \(r_{ij}\rightarrow0\) as \(j-i\rightarrow+\infty\); according to the rate of this decay, dependent normal sequences are classified into two types, ‘weakly dependent’ and ‘strongly dependent’. Leadbetter et al. [1] considered the case \(r_{ij}=r_{|j-i|}\) and \(r_{n}\log n\rightarrow0\) as \(n\rightarrow+\infty\), i.e. \(\{\xi_{i}, i\geq1\}\) is a weakly dependent stationary normal sequence. Using asymptotic independence, they studied \(M_{n}^{(k)}\) and its location (written \(L_{n}^{(k)}\)) and obtained the asymptotic behavior of the probabilities \(P(a_{n}(M_{n}^{(2)}-b_{n})\leq x, L_{n}^{(2)}/n\leq t)\) and \(P(a_{n}(M_{n}^{(1)}-b_{n})\leq x_{1}, a_{n}(M_{n}^{(2)}-b_{n})\leq x_{2})\); here and in the sequel the normalizing constants \(a_{n}\) and \(b_{n}\) are defined as
$$ a_{n}=(2\log n)^{1/2},\qquad b_{n}=a_{n}-(2a_{n})^{-1}( \log\log n+\log4\pi). $$
(1.1)
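For readers who want a quick numerical sanity check, the constants in (1.1) can be evaluated directly: with \(u_{n}=x/a_{n}+b_{n}\), the expected number of exceedances \(n(1-\Phi(u_{n}))\) drifts toward \(e^{-x}\), although slowly, as is typical for Gaussian extremes. A minimal Python sketch (function names are ours):

```python
import math

def phi_bar(u):
    # Standard normal tail 1 - Phi(u), computed via erfc for accuracy.
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def levels(n, x):
    # a_n and b_n from (1.1), and the level u_n = x/a_n + b_n.
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a, b, x / a + b

for n in (10**4, 10**6, 10**8):
    _, _, u = levels(n, x=0.5)
    # The expected number of exceedances drifts toward e^{-0.5} ≈ 0.607.
    print(n, n * phi_bar(u))
```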
Mittal and Ylvisaker [2] showed that if \(r_{ij}=r_{|j-i|}\) and \(r_{n}\log n\rightarrow\gamma>0\) as \(n\rightarrow+\infty\) (the strongly dependent stationary case), then \(a_{n}(M_{n}^{(1)}-b_{n})\) converges in distribution to a convolution of \(\exp(-e^{-x})\) and a normal distribution function; if instead \(r_{n}\log n\rightarrow\infty\), then under a different normalization the limiting distribution is normal. Recently, several important results for extremes of dependent normal sequences have been established. Ho and Hsing [3] and Tan and Peng [4] investigated the joint asymptotic distributions of the maximum of \(\{\xi_{i}, 1\leq i\leq n\}\) and \(\sum^{n}_{i=1}\xi_{i}\) for dependent Gaussian sequences. Hashorva et al. [5] considered the joint limit distributions of the maxima of complete and incomplete samples, i.e. the Piterbarg theorem, under certain conditions on the convergence rate of the correlations. Leadbetter et al. [1] developed an important tool, the weak convergence of exceedance point processes, which is crucial for studying the joint asymptotic distributions of some extremes. Many authors have further studied the asymptotic behavior of exceedance point processes under different conditions. We refer to Piterbarg [6], Hu et al. [7], Falk et al. [8], Peng et al. [9] and Hashorva et al. [10] for point processes of exceedances by weakly dependent stationary sequences, including Gaussian ones, and to Wiśniewski [11] and Lin et al. [12] for point processes of exceedances by strongly dependent Gaussian vector sequences.

Throughout this paper, let \(\{\xi_{i}, i\geq1\}\) be a standardized strongly dependent stationary normal sequence with correlation coefficients \(r_{ij}=\operatorname{Cov}(\xi_{i},\xi_{j})\). C stands for a constant which may vary from line to line, and ‘→’ denotes convergence as \(n\rightarrow\infty\). The remainder of the paper is organized as follows. In Section 2, we define an in-plane Cox process and prove that the time-normalized point process \(N_{n}\) of exceedances of the levels \(u^{(1)}_{n}, u^{(2)}_{n} ,\ldots, u^{(r)}_{n}\) by \(\{\xi_{i},1\leq i\leq n\}\) converges in distribution to this Cox process. In Section 3, as applications of our main result, the asymptotic behaviors of the probabilities \(P(a_{n}(M_{n}^{(2)}-b_{n})\leq x, L_{n}^{(2)}/n\leq t)\) and \(P(a_{n}(M_{n}^{(1)}-b_{n})\leq x_{1}, a_{n}(M_{n}^{(2)}-b_{n})\leq x_{2})\) are established.

2 Convergence of point processes of exceedances

Let \(\{\xi_{i}, i\geq1\}\) be a standardized normal sequence with correlation coefficients \(r_{ij}=\operatorname{Cov}(\xi_{i},\xi_{j})\) satisfying the following assumptions:
$$ r_{ij}=r_{|j-i|} \quad \mbox{and}\quad r_{n}\log n\rightarrow \gamma\in(0,\infty) \quad \mbox{as } n \rightarrow+\infty. $$
(2.1)
We concentrate on deriving the convergence of the time-normalized exceedance point process \(N_{n}\) of the levels \(u^{(1)}_{n}, u^{(2)}_{n},\ldots,u^{(r)}_{n}\) by \(\{\xi_{i},1\leq i\leq n\}\), where \(u^{(k)}_{n}=x_{k}/a_{n}+b_{n}\), \(k=1,2,\ldots,r\). In the proof of the main result of this section we shall use the well-known Berman inequality, which builds on the early work of Slepian [13] and Berman [14] and was refined by Li and Shao [15]. The most recent results related to Berman’s inequality are Hashorva and Weng [16] and Lu and Wang [17]: the former gave a detailed introduction to Berman’s inequality, derived the inequality under general random scaling, and thus obtained a Berman-type inequality for non-normal random vectors. The upper bound in Berman’s inequality estimates the difference between two standardized n-dimensional normal distribution functions by a convenient function of their covariances. According to Hashorva and Weng [16], some results for normal sequences may be extended to non-normal cases. The following lemmas are also needed in the proof of our result.

Lemma 2.1

Let \(d>0\) and \(\gamma\geq0\) be constants, put \(\rho_{n}=\gamma/\log n\) and suppose that \(r_{n}\log n\rightarrow\gamma\) as \(n\rightarrow\infty\). Then, for any sequence \(\{u_{n}\}\) such that \(n(1-\Phi(u_{n}))\) is bounded, we have
$$ nd\sum_{k=1}^{[nd]}|r_{k}- \rho_{n}|\exp \biggl(-\frac {u_{n}^{2}}{1+w_{k}} \biggr)\rightarrow0\quad \textit{as } n\rightarrow+\infty, $$
where \(w_{k}=\max\{|r_{k}|,\rho_{n}\}\).

Proof

The proof can be found on p.134 in Leadbetter et al. [1]. □

Leadbetter et al. [1] defined a Cox process N with intensity \(\exp(-x-\gamma+\sqrt{2\gamma}\zeta)\), where ζ is a standard normal random variable, i.e. the process has the distribution determined by the following probability:
$$\begin{aligned} P \Biggl(\bigcap^{k}_{i=1} \bigl\{ N(B_{i})=k_{i}\bigr\} \Biggr) =&\int ^{\infty}_{-\infty }\prod^{k}_{i=1} \biggl(\frac{(m(B_{i})\exp(-x-\gamma+\sqrt{2\gamma}z))^{k_{i}}}{k_{i}!} \\ & \cdot \exp\bigl(-m(B_{i})e^{-x-\gamma+\sqrt{2\gamma}z}\bigr) \biggr)\phi(z)\, dz, \end{aligned}$$
(2.2)
where \(m(\cdot)\) denotes Lebesgue measure. They also proved that the point process \(N_{n}\) of time-normalized exceedances converges in distribution to N on \((0,+\infty)\); this result is summarized as Lemma 2.2 below.
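The distribution (2.2) is that of a mixed Poisson process: conditionally on \(\zeta=z\), N is Poisson with intensity \(\exp(-x-\gamma+\sqrt{2\gamma}z)\). A small Monte Carlo sketch of a single count N(B) (all names are ours; Knuth's method is used for the Poisson draws since the standard library has no Poisson sampler):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method; adequate for the moderate means used here.
    target, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= target:
            return k
        k += 1

def sample_count(m_B, x, gamma, rng):
    # One draw of N(B) under (2.2): given zeta = z, N(B) is Poisson with
    # mean m(B) * exp(-x - gamma + sqrt(2 gamma) z).
    z = rng.gauss(0.0, 1.0)
    lam = m_B * math.exp(-x - gamma + math.sqrt(2.0 * gamma) * z)
    return poisson(lam, rng)

rng = random.Random(1)
m_B, x, gamma = 0.5, 0.0, 1.0
draws = [sample_count(m_B, x, gamma, rng) for _ in range(200000)]
# E[N(B)] = m(B) e^{-x}, because E exp(-gamma + sqrt(2 gamma) zeta) = 1.
print(sum(draws) / len(draws))
```

Since \(E\exp(-\gamma+\sqrt{2\gamma}\zeta)=1\), the sample mean should settle near \(m(B)e^{-x}\).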

Lemma 2.2

Suppose \(\{\xi_{i},i\geq1\}\) is a standard stationary normal sequence with covariances satisfying (2.1). Then the point process \(N_{n}\) of time-normalized exceedances of the level \(u_{n}(u_{n}=x/a_{n}+b_{n})\) converges in distribution to N on \((0,+\infty)\), where N is the Cox process defined by (2.2).

Proof

The proof can be found on p.136 in Leadbetter et al. [1]. □

In Theorem 2.1 below, we extend Lemma 2.2 to the case of exceedances of several levels and study a vector of point processes \(N_{n}=(N_{n}^{(1)},N_{n}^{(2)},\ldots,N_{n}^{(r)})\) which arises when \(\{\xi_{i},1\leq i\leq n\}\) exceeds the levels \(u_{n}^{(1)},u_{n}^{(2)},\ldots,u_{n}^{(r)}\), where \(u_{n}^{(k)}=x_{k}/a_{n}+b_{n}\), \(1\leq k\leq r\). For clarity, we record the locations of the exceedances of \(u_{n}^{(1)},u_{n}^{(2)},\ldots,u_{n}^{(r)}\) along fixed horizontal lines \(L_{1},L_{2},\ldots,L_{r}\) in the plane. The structure of the process vector is the same as that of the exceedance processes on pp.111-112 in Leadbetter et al. [1], where the authors presented a detailed and visualized introduction. According to Lemma 2.2, each one-dimensional point process \(N_{n}^{(k)}\) on a given line \(L_{k}\) converges in distribution to a Cox process under appropriate conditions. Before presenting Theorem 2.1, we first give two definitions, one of which concerns a two-dimensional Cox process, i.e. an in-plane Cox process.

Definition 2.1

The location of an order statistic is the index at which it appears in the index set; for example, the location of the maximum of \(\{\xi_{i},1\leq i\leq n\}\) takes values in \(\{1,\ldots,n\}\).

Definition 2.2

Let \(\{\sigma_{1j},j=1,2,\ldots\}\) be the points of a Cox process \(N^{(r)}\) on \(L_{r}\) with (stochastic) intensity \(\exp(-x_{r}-\gamma+\sqrt{2\gamma}\zeta)\), where ζ is a standard normal random variable, i.e. \(N^{(r)}\) has the distribution characterized in (2.2). Let \(\beta_{j}\), \(j=1,2,\ldots\) , be independent and identically distributed (i.i.d.) random variables, independent also of the Cox process on \(L_{r}\), taking values \(1,2,\ldots,r\) with conditional probabilities
$$P(\beta_{j}=s|\zeta=z)=\left \{ \begin{array}{l@{\quad}l} (\tau_{r-s+1}-\tau_{r-s})/\tau_{r},&\text{for }s=1,2,\ldots,r-1, \\ \tau_{1}/\tau_{r},&\text{for }s=r, \end{array} \right . $$
i.e. \(P(\beta_{j}\geq s|\zeta=z)=\tau_{r-s+1}/\tau_{r}\) for \(s=1,2,\ldots,r\), where \(\tau_{i}=e^{-x_{i}-\gamma+\sqrt{2\gamma}z}\), \(i=1, 2,\ldots,r\). For each j, place points \(\sigma_{2j}, \sigma_{3j},\ldots, \sigma_{\beta_{j}j}\) on the \(\beta_{j}-1\) lines \(L_{r-1}, L_{r-2},\ldots, L_{r-\beta_{j}+1}\), vertically above \(\sigma_{1j}\); in this way we obtain an in-plane Cox process N. Clearly the probability that a point appears on \(L_{r-1}\) above \(\sigma_{1j}\) is \(P(\beta_{j}\geq2|\zeta=z)=\tau_{r-1}/\tau_{r}\), and the deletions are conditionally independent, so that \(N^{(r-1)}\) is obtained as a conditionally independent thinning of the Cox process \(N^{(r)}\).

The structure of the in-plane Cox process N is very similar to that of the Poisson process on p.112 in Leadbetter et al. [1], but independent thinning is replaced here by conditionally independent thinning.
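The conditional probabilities in Definition 2.2 can be checked directly. Note that the factor \(e^{-\gamma+\sqrt{2\gamma}z}\) cancels in every ratio \(\tau_{i}/\tau_{j}\), so the conditional pmf of \(\beta_{j}\) is in fact the same for every z, and the probabilities telescope to 1. A short sketch (names are ours):

```python
import math

def beta_pmf(xs, gamma, z):
    # Conditional pmf P(beta = s | zeta = z), s = 1..r, from Definition 2.2.
    # xs = [x_1, ..., x_r] with x_1 >= ... >= x_r, so tau_1 <= ... <= tau_r.
    r = len(xs)
    tau = [math.exp(-xv - gamma + math.sqrt(2.0 * gamma) * z) for xv in xs]
    pmf = [(tau[r - s] - tau[r - s - 1]) / tau[r - 1] for s in range(1, r)]
    pmf.append(tau[0] / tau[r - 1])  # s = r
    return pmf

xs, gamma = [2.0, 1.0, 0.5], 1.0
p0 = beta_pmf(xs, gamma, 0.0)
p2 = beta_pmf(xs, gamma, 2.0)
print(p0, sum(p0))  # the pmf telescopes to 1 and does not depend on z
```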

Theorem 2.1

Suppose \(\{\xi_{i},i\geq1\}\) is a standardized normal sequence satisfying the conditions of Lemma  2.2. Let \(u^{(k)}_{n}=x_{k}/a_{n}+b_{n}\) (\(1\leq k\leq r\)) satisfy \(u^{(1)}_{n}\geq u^{(2)}_{n}\geq\cdots\geq u^{(r)}_{n}\), where \(a_{n}\) and \(b_{n}\) are defined in (1.1). Then the time-normalized point process \(N_{n}\) of exceedances of the levels \(u^{(1)}_{n}, u^{(2)}_{n} ,\ldots, u^{(r)}_{n}\) by \(\{\xi_{i},1\leq i\leq n\}\) converges in distribution to the in-plane Cox process N of Definition 2.2.

Proof

It is sufficient to show that, as n goes to ∞:
  1. (a) \(E(N_{n}(B))\rightarrow E(N(B))\) for all sets B of the form \((c,d]\times(r,\delta]\), \(r<\delta\), \(0< c<d\leq1\), where \(E(\cdot)\) denotes expectation;
  2. (b) \(P(N_{n}(B)=0)\rightarrow P(N(B)=0)\) for all sets B which are finite unions of disjoint sets of this form.
Consider (a) first. If \(B=(c,d]\times(r,\delta]\) intersects any of the lines, suppose these are \(L_{s},L_{s+1},\ldots,L_{t}\) (\(1\leq s\leq t\leq r\)). Then
$$N_{n}(B)=\sum^{t}_{k=s}N_{n}^{(k)}\bigl((c,d]\bigr), \qquad N(B)=\sum^{t}_{k=s}N^{(k)}\bigl((c,d]\bigr) $$
and the number of points \(j/n\) in \((c,d]\) is \([nd]-[nc]\). As in the proof of Theorem 5.5.1 on p.113 in Leadbetter et al. [1], we have \(E(N_{n}(B))=([nd]-[nc])\sum^{t}_{k=s}(1-F(u_{n}^{(k)}))\), where
$$1-F\bigl(u_{n}^{(k)}\bigr)=1-\Phi\bigl(u_{n}^{(k)} \bigr),\quad s\leq k\leq t. $$
Obviously
$$ n\bigl(1-\Phi\bigl(u_{n}^{(k)}\bigr)\bigr)=n \bigl(1-\Phi(x_{k}/a_{n}+b_{n})\bigr)\sim e^{-x_{k}}\quad \mbox{as } n\rightarrow\infty. $$
(2.3)
Thus, we have \(E(N_{n}(B))\sim n(d-c)\sum^{t}_{k=s}(\frac{e^{-x_{k}}}{n}+o(\frac{1}{n}))\rightarrow (d-c)\sum^{t}_{k=s}e^{-x_{k}}\). Since
$$\begin{aligned} E\bigl(N(B)\bigr) =&\sum^{t}_{k=s}E \bigl((d-c)\exp(-x_{k}-\gamma+\sqrt{2\gamma}\zeta )\bigr) \\ =&\sum^{t}_{k=s}(d-c)e^{-x_{k}-\gamma} \cdot e^{\frac{(\sqrt{2\gamma})^{2}}{2}} \\ =&\sum^{t}_{k=s}(d-c)e^{-x_{k}}, \end{aligned}$$
(a) follows. In order to prove (b), we must show that \(P(N_{n}(B)=0)\rightarrow P(N(B)=0)\), where \(B=\bigcup^{m}_{1}C_{k}\) with disjoint \(C_{k}=(c_{k}, d_{k}]\times(r_{k}, s_{k}]\). It is convenient to discard any set \(C_{k}\) which does not intersect any of the lines \(L_{1}, L_{2}, \ldots, L_{r}\). By taking intersections and differences of the intervals \((c_{k}, d_{k}]\), we may write B in the form \(\bigcup^{s}_{k=1}(c_{k}, d_{k}]\times E_{k}\), where the \((c_{k}, d_{k}]\) are disjoint and each \(E_{k}\) is a finite union of semi-closed intervals. It therefore follows that
$$ \bigl\{ N_{n}(B)=0\bigr\} =\bigcap ^{s}_{k=1}\bigl\{ N_{n}(F_{k})=0 \bigr\} , $$
(2.4)
where \(F_{k}=(c_{k},d_{k}]\times E_{k}\). Denote the lowest \(L_{j}\) intersecting \(F_{k}\) by \(L_{l_{k}}\). By the above thinning property, obviously
$$ \bigl\{ N_{n}(F_{k})=0\bigr\} =\bigl\{ N_{n}^{(l_{k})}\bigl((c_{k},d_{k}]\bigr)=0\bigr\} = \bigl\{ M_{n}(c_{k},d_{k})\leq u_{n}^{(l_{k})}\bigr\} , $$
(2.5)
where \(M_{n}(c_{k},d_{k})\) stands for the maximum of \(\{\xi_{i}\}\) over the indices \([c_{k}n]< i\leq[d_{k}n]\). Considering the probabilities in (2.4) and (2.5), we obtain
$$ P\bigl(N_{n}(B)=0\bigr)=P \Biggl(\bigcap ^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k}) \leq u_{n}^{(l_{k})}\bigr\} \Biggr). $$
(2.6)
It is convenient to prove the following result first. Let \(\{\bar{\xi}_{i}, i\geq1\}\) be a standardized normal sequence with constant correlation coefficient ρ, and let \(M_{n}(c,d;\rho)\) stand for the maximum of \(\{\bar{\xi}_{i}\}\) over the indices \([cn]< i\leq[dn]\). It is well known that \(M_{n}(c_{1}, d_{1}; \rho),\ldots,M_{n}(c_{k}, d_{k}; \rho)\) have the same joint distribution as \((1-\rho)^{1/2}M_{n}(c_{1}, d_{1}; 0)+\rho^{1/2}\zeta,\ldots, (1-\rho)^{1/2}M_{n}(c_{k}, d_{k}; 0)+\rho^{1/2}\zeta\), where \(c=c_{1}< d_{1}<\cdots<c_{k}<d_{k}=d\) and ζ is a standard normal variable; see Leadbetter et al. [1]. Next we estimate
$$ \Biggl\vert P \Biggl(\bigcap^{s}_{k=1} \bigl\{ M_{n}(c_{k},d_{k})\leq u_{n}^{(l_{k})} \bigr\} \Biggr)-P \Biggl(\bigcap^{s}_{k=1} \bigl\{ M_{n}(c_{k},d_{k},\rho _{n})\leq u_{n}^{(l_{k})}\bigr\} \Biggr)\Biggr\vert , $$
(2.7)
where \(\rho_{n}=\gamma/\log n\).
By Berman’s inequality, (2.7) does not exceed
$$ \frac{1}{2\pi}\sum|r_{ij}- \rho_{n}|\bigl(1-\rho_{n}^{2}\bigr)^{-1/2} \exp \biggl(-\frac{\frac {1}{2}((u_{n}^{(i)})^{2}+(u_{n}^{(j)})^{2})}{1+\omega_{ij}} \biggr), $$
(2.8)
where the sum is carried out over \(i< j\) and \(i,j\in\bigcup^{s}_{k=1}([c_{k}n],[d_{k}n]]\), \(u_{n}^{(i)}\) or \(u_{n}^{(j)}\) stands for \(x_{i}/a_{n}+b_{n}\) or \(x_{j}/a_{n}+b_{n}\), and \(\omega_{ij}=\max\{|r_{ij}|,\rho_{n}\}\). Furthermore, (2.8) does not exceed
$$\begin{aligned}& C \sum_{1\leq i< j\leq n}|r_{ij}- \rho_{n}| \exp \biggl(-\frac{\frac {1}{2}((x_{i}/a_{n}+b_{n})^{2}+(x_{j}/a_{n}+b_{n})^{2})}{1+\omega _{ij}} \biggr) \\& \quad < C n\sum_{k=1}^{n}|r_{k}- \rho_{n}|\exp \biggl(-\frac{((\min_{1\leq i\leq n}x_{i})/a_{n}+b_{n})^{2}}{1+\omega_{k}} \biggr) \\ & \quad \rightarrow0. \end{aligned}$$
Noting that \(n(1-\Phi((\min_{1\leq i\leq n}x_{i})/a_{n}+b_{n}))\) is bounded, the last ‘→’ follows from Lemma 2.1. So it suffices to prove
$$ P \Biggl(\bigcap^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k},\rho_{n})\leq u_{n}^{(l_{k})}\bigr\} \Biggr)\rightarrow P\bigl(N(B)=0\bigr). $$
By the definition of \(M_{n}(c_{k}, d_{k}, \rho_{n})\), it follows that
$$\begin{aligned}& P \Biggl(\bigcap^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k},\rho_{n}) \leq u_{n}^{(l_{k})}\bigr\} \Biggr) \\& \quad =P \Biggl(\bigcap ^{s}_{k=1}\bigl\{ (1-\rho_{n})^{\frac {1}{2}}M_{n}(c_{k},d_{k},0) +\rho_{n}^{\frac{1}{2}}\zeta\leq u_{n}^{(l_{k})}\bigr\} \Biggr) \\& \quad =\int^{+\infty}_{-\infty}P \Biggl(\bigcap ^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k},0) \leq (1-\rho_{n})^{-\frac{1}{2}}\bigl(u_{n}^{(l_{k})}- \rho_{n}^{\frac{1}{2}}z\bigr)\bigr\} \Biggr)\phi(z)\, dz, \end{aligned}$$
where the last equality follows from the argument in the last line of p.136 in Leadbetter et al. [1]. Since \(a_{n}=(2\log n)^{\frac{1}{2}}\), \(b_{n}=a_{n}+O(a_{n}^{-1}\log\log n)\), and \(\rho_{n}=\gamma/\log n\), it is easy to show that
$$ (1-\rho_{n})^{-\frac{1}{2}}\bigl(u_{n}^{(l_{k})}- \rho_{n}^{\frac{1}{2}}z\bigr) =\frac{x_{l_{k}}+\gamma-\sqrt{2\gamma}z}{a_{n}}+b_{n}+o \bigl(a_{n}^{-1}\bigr), $$
see also the proof of Theorem 6.5.1 on p.137 in Leadbetter et al. [1]. Furthermore, we may obtain the following result:
$$\begin{aligned}& P \Biggl(\bigcap^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k},0) \leq (1- \rho_{n})^{-\frac{1}{2}}\bigl(u_{n}^{(l_{k})}- \rho_{n}^{\frac{1}{2}}z\bigr)\bigr\} \Biggr) \\& \quad =P \Biggl(\bigcap^{s}_{k=1}\bigl\{ \tilde{\zeta}_{[c_{k}n]+1}\leq (1-\rho_{n})^{-\frac{1}{2}} \bigl(u_{n}^{(l_{k})}-\rho_{n}^{\frac{1}{2}}z\bigr), \ldots, \\& \qquad \tilde{\zeta}_{[d_{k}n]}\leq(1-\rho_{n})^{-\frac {1}{2}} \bigl(u_{n}^{(l_{k})}-\rho_{n}^{\frac{1}{2}}z\bigr)\bigr\} \Biggr) \\& \quad \rightarrow\prod^{s}_{k=1}\exp \bigl(-(d_{k}-c_{k})e^{-x_{l_{k}}-\gamma+\sqrt {2\gamma}z}\bigr), \end{aligned}$$
where \(\{\tilde{\zeta}_{k}\}\) is a sequence of independent standard normal variables and the last step uses the same arguments as for (2.3). By the dominated convergence theorem, it follows that
$$\begin{aligned}& \int^{+\infty}_{-\infty}P \Biggl(\bigcap ^{s}_{k=1}\bigl\{ M_{n}(c_{k},d_{k},0) \leq (1-\rho_{n})^{-\frac{1}{2}}\bigl(u_{n}^{(l_{k})}- \rho_{n}^{\frac{1}{2}}z\bigr)\bigr\} \Biggr)\phi(z)\, dz \\& \quad \rightarrow\int^{+\infty}_{-\infty}\prod ^{s}_{k=1}\exp \bigl(-(d_{k}-c_{k})e^{-x_{l_{k}}-\gamma+\sqrt{2\gamma}z} \bigr)\phi(z)\, dz \\& \quad =P\bigl(N(B)=0\bigr). \end{aligned}$$
The proof of (b) is completed. □
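The comparison sequence with constant correlation \(\rho_{n}=\gamma/\log n\) that drives the proof also gives a way to check the limit numerically: conditioning on ζ makes \(P(M_{n}\leq u_{n})\) an explicit one-dimensional integral, which can be compared with the limiting mixture \(\int\exp(-e^{-x-\gamma+\sqrt{2\gamma}z})\phi(z)\,dz\) of Lemma 2.2. A sketch under these assumptions (quadrature scheme and names are ours):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)

def phi(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / SQRT_2PI

def Phi(u):
    # Standard normal cdf via erfc.
    return 0.5 * math.erfc(-u / math.sqrt(2.0))

def quad(f, lo=-8.0, hi=8.0, steps=4000):
    # Plain midpoint rule; the integrands below are smooth and light-tailed.
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) for i in range(steps))

def exact_max_cdf(n, x, gamma):
    # P(M_n <= u_n) for the comparison sequence xi_i = sqrt(1-rho) eta_i
    # + sqrt(rho) zeta with rho = gamma/log n, by conditioning on zeta.
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    u, rho = x / a + b, gamma / math.log(n)

    def f(z):
        c = Phi((u - math.sqrt(rho) * z) / math.sqrt(1.0 - rho))
        return math.exp(n * math.log(c)) * phi(z)

    return quad(f)

def limit_cdf(x, gamma):
    # The Cox-process (mixed Gumbel) limit of Lemma 2.2.
    s = math.sqrt(2.0 * gamma)
    return quad(lambda z: math.exp(-math.exp(-x - gamma + s * z)) * phi(z))

print(exact_max_cdf(10**8, 0.0, 1.0), limit_cdf(0.0, 1.0))
```

At n around \(10^{8}\) the two values are already close, although the approach is slow, reflecting the \(1/\log n\) scales in the proof.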

Corollary 2.1

Suppose \(\{\xi_{i}, i\geq1\}\) satisfies the conditions of Theorem  2.1. Let \(B_{1},\ldots,B_{s}\) be Borel subsets of the unit interval, whose boundaries have zero Lebesgue measure. Then for integers \(m^{(k)}_{j}\),
$$\begin{aligned}& P\bigl(N^{(k)}_{n}(B_{j})=m_{j}^{(k)},j=1,2, \ldots,s; k=1,2,\ldots,r\bigr) \\& \quad \rightarrow P\bigl(N^{(k)}(B_{j})=m_{j}^{(k)},j=1,2, \ldots,s; k=1,2,\ldots,r\bigr). \end{aligned}$$

Proof

Combining Theorem 2.1 and the proof of Corollary 5.5.2 in Leadbetter et al. [1], we can complete the proof. □

Theorem 2.2

Let the levels \(u^{(k)}_{n}\) (\(1\leq k\leq r\)) satisfy
$$ P \Bigl(\max_{1\leq i\leq n}\xi_{i}\leq u_{n}^{(k)} \Bigr)\rightarrow \int^{+\infty}_{-\infty} \exp\bigl(-e^{-x_{k}-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz,\quad n\rightarrow\infty, $$
with \(u^{(1)}_{n}\geq u^{(2)}_{n}\geq\cdots\geq u^{(r)}_{n}\). Let \(S^{(k)}_{n}\) be the number of exceedances of \(u^{(k)}_{n}\) by \(\{\xi_{i},1\leq i\leq n\}\). Then, with \(\tau_{i}=e^{-x_{i}}\) (\(i=1,2,\ldots,r\)), for \(k_{1}\geq0,k_{2}\geq0,\ldots, k_{r}\geq0\),
$$\begin{aligned}& P\bigl(S^{(1)}_{n}=k_{1},S^{(2)}_{n}=k_{1}+k_{2}, \ldots ,S^{(r)}_{n}=k_{1}+k_{2}+ \cdots+k_{r}\bigr) \\& \quad \rightarrow \frac{\tau_{1}^{k_{1}}(\tau_{2}-\tau_{1})^{k_{2}}\cdots(\tau_{r}-\tau _{r-1})^{k_{r}}}{k_{1}!k_{2}!\cdots k_{r}!} \\& \qquad {}\cdot\int^{+\infty}_{-\infty}\bigl(\exp(\sqrt{2 \gamma}z-\gamma )\bigr)^{k_{1}+k_{2}+\cdots+k_{r}}\cdot \exp\bigl(-e^{-x_{r}-\gamma+\sqrt{2\gamma}z}\bigr) \phi(z)\, dz. \end{aligned}$$
(2.9)

Proof

By Corollary 2.1, the left-hand side of (2.9) converges to
$$ P\bigl(S^{(1)}=k_{1},S^{(2)}=k_{1}+k_{2}, \ldots,S^{(r)}=k_{1}+k_{2}+\cdots+k_{r} \bigr), $$
(2.10)
where \(S^{(i)}=N^{(i)}([0, 1])\). In this paper, the construction of the in-plane Cox process is similar to that of the in-plane Poisson process in Leadbetter et al. [1], so we may follow the proof of Theorem 5.6.1 in Leadbetter et al. [1] and find that (2.10) equals
$$\begin{aligned}& \frac{(k_{1}+k_{2}+\cdots+k_{r})!}{k_{1}!k_{2}!\cdots k_{r}!} \biggl(\frac{\tau_{1}}{\tau_{r}} \biggr)^{k_{1}} \biggl(\frac{\tau _{2}-\tau_{1}}{\tau_{r}} \biggr)^{k_{2}} \cdots \biggl(\frac{\tau_{r}-\tau_{r-1}}{\tau_{r}} \biggr)^{k_{r}} \\& \quad {}\cdot P\bigl(N^{(r)}\bigl((0,1]\bigr)=k_{1}+k_{2}+ \cdots+k_{r}\bigr). \end{aligned}$$
The proof is completed since
$$\begin{aligned}& P\bigl(N^{(r)}\bigl((0,1]\bigr)=k_{1}+k_{2}+ \cdots+k_{r}\bigr) \\& \quad =\int^{+\infty}_{-\infty}\frac{(\exp(-x_{r}-\gamma+\sqrt{2\gamma }z))^{k_{1}+k_{2}+\cdots+k_{r}}}{ (k_{1}+k_{2}+\cdots+k_{r})!}\cdot\exp \bigl(-e^{-x_{r}-\gamma+\sqrt{2\gamma }z}\bigr)\phi(z)\, dz \\& \quad =\frac{(\exp(-x_{r}))^{k_{1}+k_{2}+\cdots+k_{r}}}{ (k_{1}+k_{2}+\cdots+k_{r})!}\int^{+\infty}_{-\infty}\bigl( \exp(-\gamma+\sqrt {2\gamma}z)\bigr)^{k_{1}+k_{2}+\cdots+k_{r}} \\& \qquad {}\cdot\exp\bigl(-e^{-x_{r}-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz. \end{aligned}$$
 □

3 The joint distributions of some order statistics

This section contains two important results which concern the joint distributions of order statistics of \(\{\xi_{i},i\geq1\}\).

Theorem 3.1

Suppose \(\{\xi_{i}, i\geq1\}\) is a standardized normal sequence satisfying the conditions of Theorem  2.1. Let \(u^{(k)}_{n}=x_{k}/a_{n}+b_{n}\), and let \(M_{n}^{(2)}\) and \(L_{n}^{(2)}\) be the second largest maximum of \(\xi_{1}, \xi_{2},\ldots,\xi_{n}\) and its location, respectively. Then for \(x_{1}>x_{2}\), as \(n\rightarrow\infty\),
$$\begin{aligned}& P\bigl(a_{n}\bigl(M_{n}^{(1)}-b_{n} \bigr)\leq x_{1},a_{n}\bigl(M_{n}^{(2)}-b_{n} \bigr)\leq x_{2}\bigr) \\& \quad \rightarrow\int^{+\infty}_{-\infty} \bigl( \exp(-x_{2}-\gamma+\sqrt{2\gamma}z)-\exp(-x_{1}-\gamma+ \sqrt{2\gamma }z)+1\bigr) \\& \qquad {}\cdot\exp\bigl(-e^{-x_{2}-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz \end{aligned}$$
(3.1)
and
$$ P \biggl(a_{n}\bigl(M_{n}^{(2)}-b_{n} \bigr)\leq x, \frac{1}{n}L_{n}^{(2)}\leq t \biggr) \rightarrow \int^{x}_{-\infty}H(y,t)\, dy, $$
(3.2)
where
$$\begin{aligned} H(y,t) =&\int^{+\infty}_{-\infty}(1-t)\exp(-y- \gamma+\sqrt{2\gamma}z) \exp\bigl(-(1-t)e^{-y-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz \\ &{} \cdot\int^{+\infty}_{-\infty}t\exp(-y-\gamma+\sqrt{2 \gamma}z) \exp\bigl(-te^{-y-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz \\ &{} +\int^{+\infty}_{-\infty}\exp\bigl(-(1-t)e^{-y-\gamma+\sqrt{2\gamma}z} \bigr)\phi (z)\, dz \\ &{} \cdot \int^{+\infty}_{-\infty}t^{2} \exp(-2y-2\gamma+2\sqrt{2\gamma}z) \exp\bigl(-te^{-y-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz. \end{aligned}$$

Proof

Clearly the left-hand side of (3.1) is equal to
$$\begin{aligned}& P\bigl(a_{n}\bigl(M_{n}^{(1)}-b_{n} \bigr)\leq x_{1},a_{n}\bigl(M_{n}^{(2)}-b_{n} \bigr)\leq x_{2}\bigr) \\& \quad =P\bigl(S_{n}^{(2)}=0\bigr)+ P\bigl(S_{n}^{(1)}=0, S_{n}^{(2)}=1\bigr), \end{aligned}$$
where \(S^{(i)}_{n}\) is the number of exceedances of \(u^{(i)}_{n}\) by \(\xi_{1}, \xi_{2}, \ldots, \xi_{n}\). Applying Theorem 2.2 in this special case yields (3.1). In order to prove (3.2), write I and J for the intervals \(\{1, 2,\ldots, [nt]\}\) and \(\{[nt] + 1,\ldots, n\}\), respectively, and let \(M^{(1)}(I)\), \(M^{(2)}(I)\), \(M^{(1)}(J)\), \(M^{(2)}(J)\) stand for the maxima and second largest maxima of \(\xi_{1}, \xi_{2},\ldots, \xi_{n}\) over the intervals I and J, respectively. Let \(H_{n}(x_{1}, x_{2}, x_{3}, x_{4})\) be the joint distribution function of the normalized random variables
$$\begin{aligned}& X_{n}^{(1)}=a_{n}\bigl(M_{n}^{(1)}(I)-b_{n} \bigr),\qquad X_{n}^{(2)}=a_{n}\bigl(M_{n}^{(2)}(I)-b_{n} \bigr), \\& Y_{n}^{(1)}=a_{n}\bigl(M_{n}^{(1)}(J)-b_{n} \bigr),\qquad Y_{n}^{(2)}=a_{n}\bigl(M_{n}^{(2)}(J)-b_{n} \bigr). \end{aligned}$$
Assume \(x_{1} > x_{2}\) and \(x_{3} > x_{4}\); then
$$\begin{aligned}& H_{n}(x_{1},x_{2},x_{3},x_{4}) \\& \quad = P\bigl(M_{n}^{(1)}(I)\leq u_{n}^{(1)},M_{n}^{(2)}(I) \leq u_{n}^{(2)},M_{n}^{(1)}(J)\leq u_{n}^{(3)},M_{n}^{(2)}(J)\leq u_{n}^{(4)}\bigr) \\& \quad = P\bigl(N_{n}^{(1)}\bigl(I' \bigr)=0,N_{n}^{(2)}\bigl(I'\bigr) \leq1,N_{n}^{(3)}\bigl(J'\bigr)=0, N_{n}^{(4)}\bigl(J'\bigr)\leq1\bigr), \end{aligned}$$
where \(I'=(0,t]\) and \(J'=(t,1]\). By using Corollary 2.1 with \(B_{1}=I'\) and \(B_{2} = J'\), we obtain
$$\begin{aligned}& \lim_{n\rightarrow\infty}H_{n}(x_{1},x_{2},x_{3},x_{4}) \\& \quad =\lim_{n\rightarrow\infty}P\bigl(N_{n}^{(1)} \bigl(I'\bigr)=0,N_{n}^{(2)}\bigl(I' \bigr)\leq1\bigr)\cdot P\bigl(N_{n}^{(3)}\bigl(J' \bigr)=0, N_{n}^{(4)}\bigl(J'\bigr)\leq1\bigr) \\& \quad =\int^{+\infty}_{-\infty}\bigl(\bigl(e^{-x_{2}}-e^{-x_{1}} \bigr)t\exp(\sqrt{2\gamma }z-\gamma)+1\bigr) \exp\bigl(-te^{-x_{2}-\gamma+\sqrt{2\gamma}z}\bigr) \phi(z)\, dz \\& \qquad {}\cdot \int^{+\infty}_{-\infty}\bigl( \bigl(e^{-x_{4}}-e^{-x_{3}}\bigr) (1-t)\exp(\sqrt{2\gamma }z-\gamma) +1 \bigr) \exp\bigl(-(1-t)e^{-x_{4}-\gamma+\sqrt{2\gamma}z}\bigr)\phi(z)\, dz \\& \quad =H_{t}(x_{1},x_{2})H_{1-t}(x_{3},x_{4})=H(x_{1},x_{2},x_{3},x_{4}). \end{aligned}$$
Now the left-hand side of (3.2) is equal to
$$\begin{aligned}& P\bigl(M^{(2)}_{n}(I)\leq u^{(2)}_{n},M^{(2)}_{n}(I) \geq M^{(1)}_{n}(J)\bigr) \\& \quad {}+P\bigl(M^{(1)}_{n}(I)\leq u^{(2)}_{n}, M^{(1)}_{n}(J)>M^{(1)}_{n}(I)\geq M^{(2)}_{n}(J)\bigr). \end{aligned}$$
(3.3)
Obviously H is absolutely continuous and the boundaries of sets in \(R^{4}\) such as \(\{(w_{1},w_{2},w_{3}, w_{4}): w_{2}\leq x_{2},w_{2}>w_{3}\}\) and \(\{(w_{1},w_{2},w_{3},w_{4}) : w_{1}\leq x_{2},w_{3} > w_{1}\geq w_{4}\}\) have zero Lebesgue measure. Thus using Corollary 2.1, it follows that (3.3) converges to
$$P(X_{2}\leq x_{2},X_{2}\geq Y_{1})+P(X_{1}\leq x_{2},Y_{1}>X_{1} \geq Y_{2}). $$
According to the joint distribution \(H(x_{1}, x_{2}, x_{3}, x_{4})\) of the limiting vector \((X_{1}, X_{2}, Y_{1}, Y_{2})\), a simple evaluation completes the proof. □
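The right-hand side of (3.1) is a one-dimensional integral and is easy to evaluate numerically; as \(\gamma\rightarrow0\) the integrand loses its z-dependence, and (3.1) reduces to the classical weakly dependent limit \((e^{-x_{2}}-e^{-x_{1}}+1)\exp(-e^{-x_{2}})\). A sketch (names are ours):

```python
import math

def joint_limit(x1, x2, gamma, steps=4000):
    # Right-hand side of (3.1), integrated by the midpoint rule on [-8, 8].
    s = math.sqrt(2.0 * gamma)
    lo, hi = -8.0, 8.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        z = lo + (i + 0.5) * h
        t1 = math.exp(-x1 - gamma + s * z)
        t2 = math.exp(-x2 - gamma + s * z)
        dens = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        total += (t2 - t1 + 1.0) * math.exp(-t2) * dens * h
    return total

# For gamma near 0 the value is close to the classical weakly dependent
# limit (e^{-x2} - e^{-x1} + 1) exp(-e^{-x2}).
print(joint_limit(1.0, 0.0, 1e-12))
print((1.0 - math.exp(-1.0) + 1.0) * math.exp(-1.0))
```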

Remark 3.1

We may obtain the joint asymptotic distribution of \(M^{(1)}_{n} ,M^{(2)}_{n} , \ldots, M^{(k)}_{n}\) by using the same method as in Theorem 3.1.

Declarations

Acknowledgements

The research of the second author was supported by the National Natural Science Foundation of China, Grant No. 71171166. The research of other authors was supported by the Scientific Research Fund of Sichuan Provincial Education Department under Grant 12ZB082, the Scientific research cultivation project of Sichuan University of Science and Engineering under Grant 2013PY07, the Scientific Research Fund of Sichuan University of Science and Engineering under Grant 2013KY03, and the Science Research Programs for Doctors in Southwestern University of Finance and Economics, Grant No. JBK1207085.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
School of Statistics, Southwestern University of Finance and Economics
(2)
School of Science, Sichuan University of Science and Engineering

References

  1. Leadbetter, MR, Lindgren, G, Rootzén, H: Extremes and Related Properties of Random Sequences and Processes. Springer, New York (1983)
  2. Mittal, Y, Ylvisaker, D: Limit distributions for the maxima of stationary Gaussian processes. Stoch. Process. Appl. 3, 1-18 (1975)
  3. Ho, HC, Hsing, T: On the asymptotic joint distribution of the sum and maximum of stationary normal random variables. J. Appl. Probab. 33, 138-145 (1996)
  4. Tan, Z, Peng, Z: Joint asymptotic distribution of exceedances point process and partial sum of stationary Gaussian sequence. Appl. Math. J. Chin. Univ. Ser. B 26, 319-326 (2011)
  5. Hashorva, E, Peng, Z, Weng, Z: On Piterbarg theorem for maxima of stationary Gaussian sequences. Lith. Math. J. 53, 280-292 (2013)
  6. Piterbarg, VI: Asymptotic Methods in the Theory of Gaussian Processes and Fields. Translations of Mathematical Monographs, vol. 148. Am. Math. Soc., Providence (1996)
  7. Hu, A, Peng, Z, Qi, Y: Limit laws for maxima of contracted stationary Gaussian sequences. Metrika 70, 279-295 (2009)
  8. Falk, M, Hüsler, J, Reiss, RD: Laws of Small Numbers: Extremes and Rare Events, 3rd edn. DMV Seminar, vol. 23. Birkhäuser, Basel (2010)
  9. Peng, Z, Tong, J, Weng, Z: Joint limit distributions of exceedances point processes and partial sums of Gaussian vector sequence. Acta Math. Sin. Engl. Ser. 28, 1647-1662 (2012)
  10. Hashorva, E, Peng, Z, Weng, Z: Limit properties of exceedances point processes of scaled stationary Gaussian sequences. Probab. Math. Stat. 34, 45-59 (2014)
  11. Wiśniewski, M: Multidimensional point processes of extreme order statistics. Demonstr. Math. 27, 475-485 (1994)
  12. Lin, F, Shi, D, Jiang, Y: Some distributional limit theorems for the maxima of Gaussian vector sequences. Comput. Math. Appl. 64, 2497-2506 (2012)
  13. Slepian, D: The one-sided barrier problem for Gaussian noise. Bell Syst. Tech. J. 41, 463-501 (1962)
  14. Berman, SM: Limit theorems for the maximum term in stationary sequences. Ann. Math. Stat. 35, 502-516 (1964)
  15. Li, WV, Shao, QM: A normal comparison inequality and its applications. Probab. Theory Relat. Fields 122, 494-508 (2002)
  16. Hashorva, E, Weng, Z: Berman’s inequality under random scaling. Stat. Interface 7, 339-349 (2014)
  17. Lu, D, Wang, X: Some new normal comparison inequalities related to Gordon’s inequality. Stat. Probab. Lett. 88, 133-140 (2014)

Copyright

© Lin et al.; licensee Springer. 2015