By applying Corollary 2.1, we can establish the following distributional tail representation of the LGMD.
Theorem 3.1
Under the conditions of Theorem 2.1, we have
$$1-F_{k}(x)=c(x)\exp \biggl(- \int^{x}_{e}\frac{g(t)}{f(t)} \, \mathrm {dt} \biggr) $$
for large enough x, where
$$c(x)=\frac{1}{2^{k/2}\sigma^{1/k}\Gamma(1+k/2)}\exp \bigl(-1/\bigl(2\sigma^{2}\bigr) \bigr) \bigl(1+\theta_{1}(x)\bigr) $$
and
$$f(t)=\frac{\sigma^{2}}{k}t(\log t)^{1-2k},\qquad g(t)=1- \frac{\sigma ^{2}}{k}(\log t)^{-2k}, $$
where \(\theta_{1}(x)\rightarrow0\) as \(x\rightarrow\infty\).
Proof
For large enough x, by Corollary 2.1, we have
$$\begin{aligned} 1-F_{k}(x) =&\frac{\sigma^{2}}{k}(\log x)^{1-2k}xf_{k}(x) \bigl(1+\theta_{1}(x)\bigr) \\ =&\frac{1}{2^{k/2}\sigma^{1/k}\Gamma(1+k/2)}\exp \biggl(\log \log x-\frac{(\log x)^{2k}}{2\sigma^{2}} \biggr) \bigl(1+ \theta_{1}(x)\bigr) \\ =&\frac{1}{2^{k/2}\sigma^{1/k}\Gamma(1+k/2)}\exp \biggl(-\frac{1}{2\sigma^{2}} \biggr)\exp \biggl(- \int^{x}_{e} \biggl(\frac {k(\log t)^{2k-1}}{\sigma^{2}t}- \frac{1}{t\log t} \biggr) \, \mathrm{dt} \biggr) \\ &{}\times \bigl(1+\theta_{1}(x) \bigr) \\ =&\frac{1}{2^{k/2}\sigma^{1/k}\Gamma(1+k/2)}\exp \biggl(-\frac{1}{2\sigma^{2}} \biggr)\exp \biggl(- \int^{x}_{e}\frac {1-k^{-1}\sigma^{2}(\log t)^{-2k}}{k^{-1}\sigma^{2}t(\log t)^{1-2k}}\, \mathrm{dt} \biggr) \\ &{}\times\bigl(1+\theta_{1}(x)\bigr) \\ =&c(x)\exp \biggl(- \int^{x}_{e}\frac{g(t)}{f(t)}\, \mathrm {dt} \biggr), \end{aligned}$$
where \(\theta_{1}(x)\rightarrow0\) as \(x\rightarrow\infty\). The desired result follows. □
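The chain of equalities above can also be checked numerically. The following sketch (for illustration only, with the arbitrary choices \(k=1.5\) and \(\sigma=1\), and with the factor \(1+\theta_{1}(x)\) dropped) compares \(c(x)\exp(-\int^{x}_{e}g(t)/f(t)\,\mathrm{dt})\) with the closed form \((\log x)\exp(-(\log x)^{2k}/(2\sigma^{2}))/(2^{k/2}\sigma^{1/k}\Gamma(1+k/2))\) appearing in the second line of the display; the two quantities agree up to numerical integration error.

```python
# Numerical sanity check of the tail representation in Theorem 3.1
# (illustrative sketch; k and sigma are arbitrary, theta_1 is dropped).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

k, sigma = 1.5, 1.0

c0 = np.exp(-1 / (2 * sigma**2)) / (2**(k / 2) * sigma**(1 / k) * gamma(1 + k / 2))
f = lambda t: sigma**2 / k * t * np.log(t)**(1 - 2 * k)    # f(t) of Theorem 3.1
g = lambda t: 1 - sigma**2 / k * np.log(t)**(-2 * k)       # g(t) of Theorem 3.1

def representation(x):
    # c(x) * exp(-int_e^x g(t)/f(t) dt), with the factor 1 + theta_1(x) dropped
    integral = quad(lambda t: g(t) / f(t), np.e, x)[0]
    return c0 * np.exp(-integral)

def closed_form(x):
    # the closed form in the second line of the proof's display
    L = np.log(x)
    return L * np.exp(-L**(2 * k) / (2 * sigma**2)) / (2**(k / 2) * sigma**(1 / k) * gamma(1 + k / 2))

for x in [10.0, 50.0, 500.0]:
    print(x, representation(x), closed_form(x))   # the two columns should agree
```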
Remark 3.1
Since, in Theorem 3.1, \(\lim_{t\rightarrow\infty}g(t)=1\), \(f(t)\) is positive and absolutely continuous on \([1,\infty)\), and \(\lim_{t\rightarrow\infty}f'(t)=0\), an application of Theorem 3.1 and Corollary 1.7 in Resnick [13] shows that \(F_{k}\in D(\Lambda)\), and the norming constants \(a_{n}\) and \(b_{n}\) can be chosen to satisfy
$$ \frac{1}{1-F_{k}(b_{n})}=n, \qquad a_{n}=f(b_{n}) $$
(3.1)
such that
$$ \lim_{n\rightarrow\infty}F^{n}_{k}(a_{n}x+b_{n})= \Lambda(x), $$
(3.2)
where \(D(\Lambda)\) denotes the domain of attraction of \(\Lambda(x)=\exp(-\exp(-x))\).
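As a numerical illustration of (3.1), one can solve \(1-F_{k}(b_{n})=n^{-1}\) for \(b_{n}\) and then set \(a_{n}=f(b_{n})\). The sketch below does this with the closed-form tail asymptotic from the proof of Theorem 3.1 (with \(\theta_{1}\) dropped) standing in for \(1-F_{k}\); the values \(k=1.5\), \(\sigma=1\), \(n=10^{6}\) are arbitrary choices for illustration.

```python
# Choosing a_n and b_n via (3.1): solve 1 - F_k(b_n) = 1/n, then a_n = f(b_n).
# The closed-form tail asymptotic of Theorem 3.1 (theta_1 dropped) stands in for 1 - F_k.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

k, sigma, n = 1.5, 1.0, 10**6   # arbitrary illustrative values

def tail(x):
    # (log x) exp(-(log x)^{2k}/(2 sigma^2)) / (2^{k/2} sigma^{1/k} Gamma(1 + k/2))
    L = np.log(x)
    return L * np.exp(-L**(2 * k) / (2 * sigma**2)) / (2**(k / 2) * sigma**(1 / k) * gamma(1 + k / 2))

def f(t):
    return sigma**2 / k * t * np.log(t)**(1 - 2 * k)

b_n = brentq(lambda x: tail(x) - 1.0 / n, 3.0, 1e12)   # root of 1 - F_k(x) = 1/n
a_n = f(b_n)                                           # a_n = f(b_n)
print(b_n, a_n)
```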
We now establish the asymptotic distribution of the normalized maximum of a sequence of i.i.d. random variables following the LGMD. Remark 2.2 and Theorem 3.1 show that the distribution of the partial maximum converges to \(\Lambda(x)\); the remaining task is to find suitable norming constants.
Theorem 3.2
Let \(\{X_{n},n\geq1\}\) be an i.i.d. sequence from the LGMD with \(k>1/2\), and let \(M_{n}=\max\{X_{1},X_{2},\ldots,X_{n}\}\). Then
$$\lim_{n\rightarrow\infty}P(M_{n}\leq\alpha_{n}x+ \beta_{n})=\exp\bigl(-\exp(-x)\bigr), $$
where
$$ \alpha_{n} =\frac{\sigma^{2}\exp (2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)} ) (1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}}(\log\log n-(k^{2}-1)\log2-2k\log\Gamma(1+k/2)) )}{ k (2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)}+\log (1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}}(\log\log n-(k^{2}-1)\log2-2k\log\Gamma(1+k/2)) ) )^{2k-1}} $$
and
$$\begin{aligned} \beta_{n} =&\exp \bigl(2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)} \bigr) \biggl(1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}} \bigl(\log\log n- \bigl(k^{2}-1\bigr)\log2 \\ &{}-2k\log\Gamma(1+k/2) \bigr) \biggr). \end{aligned}$$
Proof
Since \(F_{k}\in D(\Lambda)\), there exist norming constants \(a_{n}>0\) and \(b_{n}\in\mathbb{R}\) such that \(\lim_{n\rightarrow\infty}P((M_{n}-b_{n})/a_{n}\leq x)=\exp(-\exp(-x))\). By Proposition 1.1 in Resnick [13] and Theorem 3.1, we can choose the norming constants \(a_{n}\) and \(b_{n}\) to satisfy \(b_{n}=(1/(1-F_{k}))^{\leftarrow}(n)\) and \(a_{n}=f(b_{n})\). Since \(F_{k}(x)\) is continuous, \(1-F_{k}(b_{n})=n^{-1}\). By Corollary 2.1, we have
$$nk^{-1}\sigma^{2}(\log b_{n})^{1-2k}b_{n}f_{k}(b_{n}) \rightarrow1, $$
as \(n\rightarrow\infty\), viz.,
$$n2^{-\frac{k}{2}}\sigma^{-\frac{1}{k}}\Gamma^{-1} \biggl(1+ \frac {k}{2} \biggr)\log b_{n}\exp \biggl(-\frac{(\log b_{n})^{2k}}{2\sigma ^{2}} \biggr)\rightarrow1, $$
as \(n\rightarrow\infty\), and so
$$ \log n-\frac{k}{2}\log2-\frac{1}{k}\log\sigma-\log \Gamma \biggl(1+\frac{k}{2} \biggr)+\log\log b_{n}- \frac{(\log b_{n})^{2k}}{2\sigma^{2}}\rightarrow0, $$
(3.3)
as \(n\rightarrow\infty\), from which one deduces
$$\frac{(\log b_{n})^{2k}}{2\sigma^{2}\log n}\rightarrow1, $$
as \(n\rightarrow\infty\), thus
$$2k\log\log b_{n}-\log2-2\log\sigma-\log\log n\rightarrow0, $$
as \(n\rightarrow\infty\), hence
$$\log\log b_{n}=\frac{1}{2k}(\log2+2\log\sigma+\log\log n)+o(1). $$
Substituting the equality above into (3.3), we obtain
$$(\log b_{n})^{2k}=2\sigma^{2} \biggl(\log n+ \frac{1}{2k}\log\log n-\frac {k^{2}-1}{2k}\log2-\log\Gamma \biggl(1+ \frac{k}{2} \biggr) \biggr)+o(1), $$
from which one deduces that
$$\log b_{n}=2^{\frac{1}{2k}}\sigma^{\frac{1}{k}}(\log n)^{\frac {1}{2k}} \biggl(1+\frac{\log\log n-(k^{2}-1)\log2-2k\log\Gamma (1+\frac{k}{2})}{2^{2}k^{2}\log n}+o \bigl((\log n)^{-1} \bigr) \biggr), $$
therefore
$$\begin{aligned} b_{n} =&\exp \bigl(2^{\frac{1}{2k}}\sigma^{\frac{1}{k}}(\log n)^{\frac{1}{2k}} \bigr) \biggl(1+\frac{\sigma^{\frac{1}{k}}(\log n)^{\frac{1}{2k}-1}}{2^{2-\frac{1}{2k}}k^{2}} \biggl(\log\log n- \bigl(k^{2}-1\bigr)\log2 \\ &{}-2k\log\Gamma\biggl(1+\frac{k}{2}\biggr) \biggr)+o \bigl((\log n)^{\frac{1}{2k}-1} \bigr) \biggr) \\ =&\beta_{n}+o \bigl((\log n)^{\frac{1}{2k}-1}\exp \bigl(2^{\frac{1}{2k}} \sigma^{\frac {1}{k}}(\log n)^{\frac{1}{2k}} \bigr) \bigr), \end{aligned}$$
where
$$\begin{aligned} \beta_{n} =&\exp \bigl(2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)} \bigr) \biggl(1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}}\bigl(\log\log n- \bigl(k^{2}-1\bigr)\log2 \\ &{}-2k\log\Gamma(1+k/2)\bigr) \biggr). \end{aligned}$$
Hence, we have
$$\begin{aligned} \alpha_{n}&=f(\beta_{n}) \\ &=\frac{\sigma^{2}\exp(2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)}) (1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}}(\log\log n-(k^{2}-1)\log2-2k\log\Gamma(1+k/2)) )}{ k (2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)}+\log (1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}}(\log\log n-(k^{2}-1)\log2-2k\log\Gamma(1+k/2)) ) )^{2k-1}}. \end{aligned}$$
It is easy to check that \(\lim_{n\rightarrow\infty}\alpha_{n}/a_{n}=1\) and \(\lim_{n\rightarrow\infty}(b_{n}-\beta_{n})/\alpha_{n}=0\). Hence, by Theorem 1.2.3 in Leadbetter et al. [16], the proof is complete. □
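The final step of the proof, \(\lim_{n\rightarrow\infty}\alpha_{n}/a_{n}=1\) and \(\lim_{n\rightarrow\infty}(b_{n}-\beta_{n})/\alpha_{n}=0\), can also be examined numerically. The sketch below (again with the closed-form tail asymptotic of Theorem 3.1 standing in for \(1-F_{k}\), and with arbitrary parameter values) evaluates the explicit constants \(\alpha_{n}\), \(\beta_{n}\) of Theorem 3.2 against the constants \(a_{n}\), \(b_{n}\) obtained from (3.1).

```python
# Numerical check that alpha_n/a_n -> 1 and (b_n - beta_n)/alpha_n -> 0,
# with the closed-form tail asymptotic of Theorem 3.1 standing in for 1 - F_k.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

k, sigma = 1.5, 1.0   # arbitrary illustrative values

def tail(x):
    # tail approximation of 1 - F_k(x) from the proof of Theorem 3.1 (theta_1 dropped)
    L = np.log(x)
    return L * np.exp(-L**(2 * k) / (2 * sigma**2)) / (2**(k / 2) * sigma**(1 / k) * gamma(1 + k / 2))

def f(t):
    return sigma**2 / k * t * np.log(t)**(1 - 2 * k)

def explicit_constants(n):
    # alpha_n and beta_n as given in Theorem 3.2
    A = 2**(1 / (2 * k)) * sigma**(1 / k) * np.log(n)**(1 / (2 * k))
    B = (sigma**(1 / k) * np.log(n)**(1 / (2 * k) - 1) / (2**(2 - 1 / (2 * k)) * k**2)
         * (np.log(np.log(n)) - (k**2 - 1) * np.log(2) - 2 * k * np.log(gamma(1 + k / 2))))
    beta = np.exp(A) * (1 + B)
    alpha = sigma**2 * np.exp(A) * (1 + B) / (k * (A + np.log(1 + B))**(2 * k - 1))
    return alpha, beta

for n in [1e4, 1e8, 1e16, 1e32]:
    b = brentq(lambda x: tail(x) - 1.0 / n, 3.0, 1e12)   # b_n from (3.1)
    a = f(b)                                             # a_n = f(b_n)
    alpha, beta = explicit_constants(n)
    print(n, alpha / a, (b - beta) / alpha)              # -> 1 and -> 0
```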
Remark 3.2
Theorem 3.2 shows that the limit distribution of the normalized maximum from the logarithmic Maxwell distribution (the LGMD with \(k=1\)) is the extreme value distribution \(\exp(-\exp(-x))\), with norming constants
$$\alpha_{n}=\frac{\sigma^{2}\exp (2^{1/2}\sigma(\log n)^{1/2} ) (1+\frac{\sigma}{2^{3/2}(\log n)^{1/2}} (\log\log n-2\log(\pi^{1/2}/2) ) )}{ 2^{1/2}\sigma(\log n)^{1/2}+\log (1+\frac{\sigma }{2^{3/2}(\log n)^{1/2}} (\log\log n-2\log(\pi^{1/2}/2) ) )} $$
and
$$\beta_{n}=\exp \bigl(2^{1/2}\sigma(\log n)^{1/2} \bigr) \biggl(1+\frac {\sigma}{2^{3/2}(\log n)^{1/2}} \bigl(\log\log n-2\log\bigl(\pi ^{1/2}/2\bigr) \bigr) \biggr). $$
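This special case can be checked by simulation, assuming (as the tail formulas suggest) that a logarithmic Maxwell random variable can be generated as \(\exp(Y)\) with Y Maxwell distributed with scale σ. The sketch below draws block maxima, normalizes them by the constants of Remark 3.2, and compares the empirical df with \(\exp(-\exp(-x))\); at moderate block sizes the agreement is only approximate.

```python
# Monte Carlo illustration of Remark 3.2 (k = 1), assuming a logarithmic Maxwell
# variable can be generated as exp(Y) with Y ~ Maxwell(scale = sigma).
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)
sigma, n, m = 1.0, 5000, 1000          # block size n, number of blocks m

logn = np.log(n)
t = sigma / (2**1.5 * np.sqrt(logn)) * (np.log(logn) - 2 * np.log(np.sqrt(np.pi) / 2))
beta = np.exp(np.sqrt(2) * sigma * np.sqrt(logn)) * (1 + t)
alpha = sigma**2 * np.exp(np.sqrt(2) * sigma * np.sqrt(logn)) * (1 + t) / (
    np.sqrt(2) * sigma * np.sqrt(logn) + np.log(1 + t))

# normalized block maxima of exp(Maxwell) samples
Y = maxwell.rvs(scale=sigma, size=(m, n), random_state=rng)
Z = (np.exp(Y.max(axis=1)) - beta) / alpha

for x in [-1.0, 0.0, 1.0, 2.0]:
    print(x, (Z <= x).mean(), np.exp(-np.exp(-x)))   # empirical df vs Gumbel df
```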
At the end of this section, we generalize the result of Theorem 3.2 to a finite mixture of LGMDs.
Finite mixture distributions (or models) have been widely applied in various areas such as chemistry [17] and image and video databases [18]. They have also received attention in extreme value theory. Mladenović [19] considered extreme values of sequences of independent random variables whose common mixture distribution involves normal, Cauchy, and uniform components. Peng et al. [20] investigated the limit distribution, and the corresponding uniform rate of convergence, for a finite mixture of exponential distributions.
If the distribution function (df) F of a random variable ξ satisfies
$$F(x)=p_{1}F_{1}(x)+p_{2}F_{2}(x)+ \cdots+p_{r}F_{r}(x), $$
we say that ξ follows a finite mixture distribution F, where the \(F_{i}\), \(1\leq i\leq r\), are the distinct dfs of the mixture components. The weights satisfy \(p_{i}>0\), \(i=1,2,\ldots,r\), and \(\sum^{r}_{j=1}p_{j}=1\).
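The definition can be illustrated with a small sampling sketch: draw a component label with probability \(p_{i}\) and then sample from the chosen \(F_{i}\). The two components below are exponential dfs, used purely as placeholders; they are not LGMDs.

```python
# Illustrative sketch of a finite mixture df F(x) = p_1 F_1(x) + ... + p_r F_r(x)
# and of sampling from it (placeholder exponential components).
import numpy as np

rng = np.random.default_rng(1)

p = np.array([0.3, 0.7])                          # weights, p_i > 0, sum = 1
F = [lambda x: 1 - np.exp(-x),                    # component df F_1: Exp(1)
     lambda x: 1 - np.exp(-x / 2.0)]              # component df F_2: Exp(mean 2)

def mixture_df(x):
    # F(x) = p_1 F_1(x) + p_2 F_2(x)
    return sum(pi * Fi(x) for pi, Fi in zip(p, F))

def mixture_sample(size):
    # pick a component with probability p_i, then draw from that component
    labels = rng.choice(len(p), size=size, p=p)
    u = rng.random(size)
    return np.where(labels == 0, -np.log(1 - u), -2.0 * np.log(1 - u))

x = 1.5
s = mixture_sample(200_000)
print(mixture_df(x), (s <= x).mean())             # the two values should be close
```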
Next, we consider the extreme value distribution of a finite mixture with component dfs \(F_{k_{i}}\) following \(\operatorname{LGMD}(k_{i})\), where \(k_{i}>1\) for \(1\leq i\leq r\) and \(k_{i}\neq k_{j}\) for \(i\neq j\). Denote the df of this finite mixture by
$$ F(x)=p_{1}F_{k_{1}}(x)+p_{2}F_{k_{2}}(x)+ \cdots+p_{r}F_{k_{r}}(x) $$
(3.4)
for \(x>0\).
Theorem 3.3
Let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. random variables with common df \(F\) given by (3.4), and set \(M_{n}=\max\{Z_{k},1\leq k\leq n\}\). Then
$$\lim_{n\rightarrow\infty}P \biggl(\frac{M_{n}-\beta_{n}}{\alpha_{n}}\leq x \biggr)=\exp \bigl(-\exp(-x)\bigr) $$
holds with the norming constants
$$\alpha_{n}=\frac{\sigma^{1/k}\exp (2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)} )}{2^{1-1/(2k)}k(\log n)^{1-1/(2k)}} $$
and
$$\begin{aligned} \beta_{n} =&\exp \bigl(2^{1/(2k)}\sigma^{1/k}(\log n)^{1/(2k)} \bigr) \biggl(1+\frac{\sigma^{1/k}(\log n)^{1/(2k)-1}}{2^{2-1/(2k)}k^{2}} \bigl(\log\log n+2k\log p \\ &{}- \bigl(k^{2}-1\bigr)\log2-2k\log\Gamma(1+k/2) \bigr) \biggr), \end{aligned}$$
where \(\sigma=\max\{\sigma_{1},\ldots,\sigma_{r}\}\), \(k=\min\{k_{1},\ldots,k_{r}\}\), and \(p=p_{i_{1}}+\cdots+p_{i_{j}}\) with \(i_{s}\in\{i:\sigma_{i}=\sigma\text{ and }k_{i}=k\}\), \(1\leq s\leq j\leq r\).
Proof
By (3.4), we have
$$1-F(x)=\sum^{r}_{i=1}p_{i} \bigl(1-F_{k_{i}}(x)\bigr). $$
By Theorem 2.1, we have
$$\begin{aligned}& \sum^{r}_{i=1}\frac{p_{i}\sigma^{2}_{i}}{k_{i}}(\log x)^{1-2k_{i}}xf_{k_{i}}(x) \\& \quad < 1-F(x)< \sum^{r}_{i=1} \frac{p_{i}\sigma^{2}_{i}}{k_{i}}(\log x)^{1-2k_{i}}x \biggl(1+\biggl(\frac{\sigma^{2}_{i}}{k_{i}}(\log x)^{2k_{i}}-1\biggr)^{-1} \biggr)f_{k_{i}}(x) \end{aligned}$$
for all \(x>1\), by the definition of \(f_{k_{i}}\), which implies
$$\begin{aligned}& \frac{p\log x}{2^{\frac{k}{2}}\sigma^{\frac{1}{k}}\Gamma(1+\frac{k}{2})}\exp \biggl(-\frac{(\log x)^{2k}}{2\sigma^{2}} \biggr) \bigl(1+A_{k}(x)\bigr) \\& \quad < 1-F(x) \\& \quad < \frac{p\log x}{2^{\frac{k}{2}}\sigma^{\frac{1}{k}}\Gamma(1+\frac{k}{2})} \biggl(1+\biggl(\frac{\sigma^{2}}{k}(\log x)^{2k}-1\biggr)^{-1} \biggr)\exp \biggl(-\frac{(\log x)^{2k}}{2\sigma^{2}} \biggr) \bigl(1+B_{k}(x)\bigr), \end{aligned}$$
(3.5)
where
$$ A_{k}(x)=\sum_{k_{i}\neq k} \frac{2^{\frac{k}{2}}p_{i}\sigma^{\frac{1}{k}}\Gamma(1+\frac {k}{2})}{2^{\frac{k_{i}}{2}}p\sigma^{\frac{1}{k_{i}}}\Gamma(1+\frac{k_{i}}{2})} \exp \biggl(\frac{(\log x)^{2k}}{2\sigma^{2}}-\frac{(\log x)^{2k_{i}}}{2\sigma^{2}_{i}} \biggr) \rightarrow0 $$
(3.6)
and
$$\begin{aligned} B_{k}(x) =&\sum_{k_{i}\neq k} \frac{2^{\frac{k}{2}}p_{i}\sigma^{\frac{1}{k}}\Gamma(1+\frac {k}{2})}{2^{\frac{k_{i}}{2}}p\sigma^{\frac{1}{k_{i}}}\Gamma(1+\frac{k_{i}}{2})} \frac{1+(\frac{\sigma^{2}_{i}}{k_{i}}(\log x)^{2k_{i}}-1)^{-1}}{1+(\frac{\sigma^{2}}{k}(\log x)^{2k}-1)^{-1}} \exp \biggl(\frac{(\log x)^{2k}}{2\sigma^{2}}- \frac{(\log x)^{2k_{i}}}{2\sigma^{2}_{i}} \biggr) \\ \rightarrow&0 \end{aligned}$$
(3.7)
as \(x\rightarrow\infty\) since \(k=\min\{k_{1},k_{2},\ldots,k_{r}\}\). Combining (3.5)-(3.7) with (2.2), for large enough x, we obtain
$$ 1-F(x)\sim p\bigl(1-F_{k}(x)\bigr) $$
(3.8)
as \(x\rightarrow\infty\), where \(F_{k}\) denotes the df of the \(\operatorname{LGMD}(k)\), and σ and p are as defined in Theorem 3.3. By Proposition 1.19 in Resnick [13], we conclude that \(F\in D(\Lambda)\). The norming constants can be obtained from Theorem 3.2 and (3.8). The desired result follows. □
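Relation (3.8) can be illustrated numerically. In the sketch below, the component tails \(1-F_{k_{i}}(x)\) are replaced by the closed-form asymptotic from the proof of Theorem 3.1 (with \(\theta_{1}\) dropped), for a hypothetical two-component mixture with \(k_{1}=1<k_{2}=1.5\) and equal scales, so that \(k=1\), \(\sigma=1\), and \(p=p_{1}\); the ratio \((1-F(x))/(p(1-F_{k}(x)))\) approaches 1 as x grows.

```python
# Numerical illustration of (3.8) for a hypothetical two-component mixture,
# with the closed-form tail asymptotic of Theorem 3.1 standing in for 1 - F_{k_i}.
import numpy as np
from scipy.special import gamma

components = [(0.4, 1.0, 1.0), (0.6, 1.5, 1.0)]   # (p_i, k_i, sigma_i)
p, k, sigma = 0.4, 1.0, 1.0                       # here k = min k_i and p = p_1

def tail_k(x, k, sigma):
    # (log x) exp(-(log x)^{2k}/(2 sigma^2)) / (2^{k/2} sigma^{1/k} Gamma(1 + k/2))
    L = np.log(x)
    return L * np.exp(-L**(2 * k) / (2 * sigma**2)) / (2**(k / 2) * sigma**(1 / k) * gamma(1 + k / 2))

for x in [5.0, 10.0, 100.0, 1e4]:
    mixture_tail = sum(pi * tail_k(x, ki, si) for pi, ki, si in components)
    print(x, mixture_tail / (p * tail_k(x, k, sigma)))   # ratio should approach 1
```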