Precise large deviations of aggregate claims in a discrete-time risk model with Poisson ARCH claim-number process

Journal of Inequalities and Applications 2016, 2016:140

https://doi.org/10.1186/s13660-016-1080-6

Received: 17 March 2016

Accepted: 2 May 2016

Published: 21 May 2016

Abstract

In this paper, we consider a discrete-time risk model in which the claim number follows a Poisson ARCH process, so that the conditional mean of the current claim number depends on the previous observations. We study precise large deviations of the aggregate amount of claims. In the heavy-tailed case, we obtain a precise large deviation formula that agrees with existing results in the literature. To establish the moderate deviation principle required by the structure of the claim-number process, our treatment relies on an argument specifically designed for the autoregressive structure of our model.

Keywords

  • Poisson ARCH process
  • precise large deviations
  • moderate deviation principle

MSC

  • 60F10
  • 62J05
  • 90B30

1 Introduction

The goal of this paper is to study precise large deviations for the aggregate claims
$$ S_{n}=\sum_{t=1}^{n} \sum_{j=1}^{N_{t}}X_{t,j} , $$
(1.1)
where \(N_{t}\) is the number of claims in period t and \(\{X_{t,j}, j=1,2,\ldots; t=1,2,\ldots,n\}\) is an array of independent, identically distributed (i.i.d.) claim-size random variables, independent of \(\{N_{t}\}\), with common distribution \(F_{X}=1-\overline{F}_{X}\). Here the claim-number process \(\{N_{t},t=1,2,\ldots\}\) is a Poisson first-order autoregressive conditional heteroscedasticity (\(\operatorname{ARCH}(1)\)) process, defined as
$$ \left \{ \textstyle\begin{array}{l} N_{t}|\mathscr{F}_{t-1}\sim \operatorname{Poisson}(\lambda_{t}), \\ \lambda_{t}=a_{0}+a_{1}N_{t-1}, \end{array}\displaystyle \right . $$
(1.2)
where \(a_{0} > 0\), \(0\leq a_{1}<1\), \(N_{0}\geq0\) is a deterministic integer, and \(\mathscr{F}_{t-1}=\sigma(N_{0}, N_{1}, \ldots, N_{t-1})\) is the σ-field generated by \(\{N_{0},N_{1}, \ldots, N_{t-1}\}\).
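As an illustration, the recursion (1.2) is straightforward to simulate. The following sketch (using NumPy, with the illustrative choices \(a_{0}=1\), \(a_{1}=0.3\), \(N_{0}=0\), and a fixed seed, none of which come from the paper) generates one sample path of the claim-number process:

```python
import numpy as np

def simulate_poisson_arch(n, a0, a1, n0=0, rng=None):
    """Simulate N_1, ..., N_n from the Poisson ARCH(1) model (1.2):
    N_t | F_{t-1} ~ Poisson(a0 + a1 * N_{t-1})."""
    rng = np.random.default_rng(rng)
    counts = np.empty(n, dtype=np.int64)
    prev = n0
    for t in range(n):
        lam = a0 + a1 * prev          # conditional mean lambda_t
        prev = rng.poisson(lam)
        counts[t] = prev
    return counts

path = simulate_poisson_arch(1000, a0=1.0, a1=0.3, rng=42)
print(path[:10], path.mean())
```

Since \(0\le a_{1}<1\), the conditional means \(\lambda_{t}\) stay stochastically bounded and the sample mean settles near the stationary value \(a_{0}/(1-a_{1})\).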

In recent years, time series models for count data have become a popular research topic. Cossette et al. [1] used two integer-valued time series, the Poisson moving average (MA) and the Poisson autoregressive (AR) processes, to model the claim frequency in a risk model. Li [2] proposed a discrete-time risk model in which the claim number follows an integer-valued ARCH (INARCH) process with Poisson deviates, namely the model (1.1), and derived some statistical properties and the adjustment coefficient of the risk model.

The Poisson INARCH process was first considered by Rydberg and Shephard [3], who applied it in finance to model the number of transactions taking place during a short time interval. In model (1.2), the conditional mean of the current claim number is assumed to depend linearly on the previous observations. Streett [4] and Ferland et al. [5] showed that the process \(N_{t}\) in (1.2) is stationary if \(0\leq a_{1} < 1\). In particular, the stationary expectation and variance of \(N_{t}\) are
$$\mathbb{E}N_{t}=\frac{a_{0}}{1-a_{1}} \quad \mbox{and} \quad \operatorname{Var} (N_{t})=\frac {\mathbb{E} N_{t}}{1-a_{1}^{2}}. $$
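These stationary moments are easy to check numerically. The following sketch (an illustration, not from the paper; parameters \(a_{0}=1\), \(a_{1}=0.3\) and the seed are arbitrary choices) simulates one long path and compares the sample mean and variance with \(a_{0}/(1-a_{1})\) and \(\frac{a_{0}/(1-a_{1})}{1-a_{1}^{2}}\):

```python
import numpy as np

a0, a1 = 1.0, 0.3
rng = np.random.default_rng(0)
N = 200_000
path = np.empty(N)
prev = 0
for t in range(N):
    prev = rng.poisson(a0 + a1 * prev)   # N_t | F_{t-1} ~ Poisson(a0 + a1*N_{t-1})
    path[t] = prev

mean_th = a0 / (1 - a1)                  # stationary mean, = 10/7
var_th = mean_th / (1 - a1 ** 2)         # stationary variance
print(path.mean(), mean_th, path.var(), var_th)
```

With 200,000 observations, both empirical moments land within a few hundredths of the theoretical values.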
We study the precise large deviations for \(\{S_{n} , n \geq0\}\) in (1.1). We are only interested in the case of heavy-tailed claims. Heavy-tailed distributions belong to the core issues in actuarial science, because they are more in accordance with claims’ reality than light-tailed ones. A useful heavy-tailed class is the class \(\mathscr{C}\) of distribution functions with consistent variation (also called intermediate regular variation), characterized by the relations \(\overline{F}(x)>0\) for all \(x\geq0\) and
$$\lim_{y\uparrow1}\limsup_{x\rightarrow\infty}\frac {\overline{F}(xy)}{\overline{F}(x)}= \lim_{y\downarrow1}\liminf_{x\rightarrow\infty }\frac{\overline{F}(xy)}{\overline{F}(x)}=1. $$
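A standard member of \(\mathscr{C}\) is the Pareto tail \(\overline{F}(x)=x^{-\alpha}\), \(x\ge1\): the ratio \(\overline{F}(xy)/\overline{F}(x)=y^{-\alpha}\) does not depend on x and tends to 1 as \(y\to1\). A short numerical illustration (the choice \(\alpha=1.5\) is ours, for illustration only):

```python
# Pareto(alpha) survival function F_bar(x) = x**(-alpha) for x >= 1.
# Its ratio F_bar(x*y)/F_bar(x) = y**(-alpha) is free of x and -> 1 as y -> 1,
# so the consistent-variation relations hold trivially.
alpha = 1.5

def tail(x):
    return x ** (-alpha)

for y in (0.9, 0.99, 1.01, 1.1):
    ratios = [tail(x * y) / tail(x) for x in (1e2, 1e4, 1e6)]
    print(y, ratios[0])   # the same value for every x
```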
Our main result is given below.

Theorem 1.1

Consider the aggregate amount of claims (1.1). Assume that \(F_{X}\in\mathscr{C}\), \(\mathbb{E}X=\mu\in(0,\infty)\), and that \(N_{0}\geq0\) is a deterministic integer. Then, for every fixed \(\gamma>0\), uniformly for all \(x\geq\gamma n\),
$$ \mathbb{P}\biggl(S_{n}-\frac{a_{0}}{1-a_{1}}n\mu >x\biggr) \sim\frac{na_{0}}{1-a_{1}} \overline{F}_{X}(x),\quad n\rightarrow\infty. $$
(1.3)
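Relation (1.3) can be probed by Monte Carlo. The sketch below (illustrative only; the Pareto claim distribution \(\overline{F}_{X}(x)=x^{-1.5}\) on \([1,\infty)\) with \(\mu=3\), the parameters \(a_{0}=1\), \(a_{1}=0.3\), \(n=30\), \(x=600\), and the seed are all our choices, not the paper's) estimates the left-hand side of (1.3) and compares it with the right-hand side; at finite n the two agree only roughly:

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, alpha = 1.0, 0.3, 1.5
nu = a0 / (1 - a1)              # E N_t = a0/(1-a1)
mu = alpha / (alpha - 1)        # mean of Pareto(alpha) on [1, inf)
n, x, sims = 30, 600.0, 100_000

prev = np.zeros(sims, dtype=np.int64)
totals = np.zeros(sims)
for t in range(n):
    prev = rng.poisson(a0 + a1 * prev)                # claim counts, all paths at once
    claims = rng.random(prev.sum()) ** (-1.0 / alpha) # Pareto claim sizes (inverse transform)
    owner = np.repeat(np.arange(sims), prev)          # path index of each claim
    totals += np.bincount(owner, weights=claims, minlength=sims)

emp = np.mean(totals - nu * n * mu > x)   # estimate of the left-hand side of (1.3)
approx = n * nu * x ** (-alpha)           # right-hand side of (1.3)
print(emp, approx)
```

The empirical tail probability is of the same order as \(\frac{na_{0}}{1-a_{1}}\overline{F}_{X}(x)\), reflecting the single-big-jump behavior behind the heavy-tailed asymptotics.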

Precise large deviations form an important topic in applied probability and are commonly used to quantitatively characterize extremal events. As is well known, there is a vast literature on the asymptotic behavior of large deviations for risk models with heavy-tailed claim sizes; see, e.g., Klüppelberg and Mikosch [6], Ng et al. [7], Leipus and Šiaulys [8], Asmussen [9], Liu [10], Yang et al. [11], and Chen and Yuen [12], among many others. On the other hand, precise large deviations for a risk model whose claim number follows a Poisson ARCH process have not been considered in the literature.

We now comment on the approach used in this work. Our method is elementary and does not rely on the classical treatment in the area of precise large deviations. First, for convenience in applications, we show that the aggregate claim \(S_{n}\) in (1.1) has the same distribution as a random walk. Second, we establish the moderate deviation principle (MDP) for the partial sum \(\sum_{t=1}^{n} N_{t}\) generated by the Poisson \(\operatorname{ARCH}(1)\) process \(\{N_{t}\}\) defined in (1.2). As a consequence of the MDP, we obtain genuine exponential decay for the probability that the sample average deviates from its equilibrium value; this property is a crucial step in the proof of Theorem 1.1. Finally, we point out that equation (1.3) agrees with existing results in the literature, which indicates that the dependence structure of \(N_{t}\) defined by (1.2) does not affect the asymptotic behavior of the large deviations of \(\{S_{n}, n\geq0\}\).

The rest of the paper is organized as follows. Section 2 recalls various preliminaries and prepares a few lemmas. Section 3 presents the proof of the main result by establishing the corresponding asymptotic lower and upper bounds.

2 Preliminaries

Throughout this paper, for two positive functions \(f(x)\) and \(g(x)\), we write
$$\begin{aligned}& f(x)\sim g(x) \quad \mbox{if }\lim_{x\rightarrow\infty} f(x)/g(x)=1; \\& f(x)\lesssim g(x)\quad \mbox{if }\limsup_{x\rightarrow\infty} f(x)/g(x)\leqslant1\quad \mbox{and} \\& f(x)\gtrsim g(x)\quad \mbox{if } \liminf_{x\rightarrow\infty} f(x)/g(x)\geqslant1. \end{aligned}$$
For two positive bivariate functions \(f(x,n)\) and \(g(x,n)\), we say that the asymptotic relation \(f(x,n)\lesssim g(x,n)\) holds uniformly for x in a nonempty set \(\Delta_{n}\) if
$$\limsup_{n\rightarrow\infty}\sup_{x\in\Delta_{n}} \frac{f(x;n)}{g(x;n)} \leqslant1. $$

First, we show that the aggregate claim \(S_{n}\) in (1.1) has the same distribution as a random walk.

Lemma 2.1

Let \(\{Y, Y_{j},j\geqslant1\}\) be a sequence of i.i.d. non-negative random variables such that Y and X in (1.1) are identically distributed. Then
$$S_{n} \stackrel{d}{=}\sum_{j=1}^{N_{1}+N_{2}+\cdots+N_{n}}Y_{j}, $$
where \(\stackrel{d}{=}\) denotes the identical distribution.

Proof

For any real \(r\leq0\) (so that all expectations below are finite, the summands being non-negative), the moment generating function of \(S_{n}\) can be expressed as
$$\begin{aligned} M_{S} =&E\bigl\{ \exp\{rS_{n}\}\bigr\} =E\Biggl\{ \exp\Biggl\{ r\sum_{i=1}^{n}\sum _{j=1}^{N_{i}}X_{i,j}\Biggr\} \Biggr\} \\ =&E\biggl\{ e^{r\sum _{j=1}^{N_{1}}X_{1,j}}e^{r\sum _{j=1}^{N_{2}}X_{2,j}}\cdots e^{r\sum _{j=1}^{N_{n}}X_{n,j}}\cdot\sum _{n_{1},n_{2},\ldots ,n_{n}}I_{(N_{1}=n_{1},N_{2}=n_{2},\ldots, N_{n}=n_{n})}\biggr\} \\ =&\sum_{n_{1},n_{2},\ldots ,n_{n}}E\bigl\{ e^{r\sum _{j=1}^{n_{1}}X_{1,j}}e^{r\sum _{j=1}^{n_{2}}X_{2,j}} \cdots e^{r\sum _{j=1}^{n_{n}}X_{n,j}}\cdot I_{(N_{1}=n_{1},N_{2}=n_{2},\ldots, N_{n}=n_{n})}\bigr\} \\ =&\sum_{n_{1},n_{2},\ldots ,n_{n}}\bigl(Ee^{rX} \bigr)^{\sum _{i=1}^{n}n_{i}}\cdot P\{ N_{1}=n_{1},N_{2}=n_{2}, \ldots, N_{n}=n_{n}\} \\ =&E\bigl\{ (M_{X})^{N_{1}+N_{2}+\cdots+ N_{n}}\bigr\} . \end{aligned}$$
On the other hand, we have
$$\begin{aligned}& E\Biggl\{ \exp\Biggl\{ r\sum_{j=1}^{N_{1}+N_{2}+\cdots+N_{n}}Y_{j} \Biggr\} \Biggr\} \\& \quad = E\Biggl\{ \exp\Biggl\{ r\sum_{j=1}^{N_{1}+N_{2}+\cdots+N_{n}}Y_{j} \Biggr\} \cdot\sum_{m}I_{(N_{1}+N_{2}+\cdots +N_{n}=m)}\Biggr\} \\& \quad = \sum_{m}E\Biggl\{ \exp\Biggl\{ r\sum _{j=1}^{m}Y_{j}\Biggr\} \cdot I_{(N_{1}+N_{2}+\cdots +N_{n}=m)}\Biggr\} \\& \quad =\sum_{m}\bigl(Ee^{rY} \bigr)^{m}\cdot P\{N_{1}+N_{2}+\cdots +N_{n}=m\} =E\bigl\{ (M_{Y})^{N_{1}+N_{2}+\cdots+ N_{n}}\bigr\} , \end{aligned}$$
where \(m=n_{1}+n_{2}+\cdots+n_{n}\). Hence, by the uniqueness of the Laplace transform of non-negative random variables, \(S_{n}\) and \(\sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}\) have the same distribution. □
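Lemma 2.1 can also be checked by simulation: draw \(S_{n}\) period by period, draw the compound form over an independent copy of \(N_{1}+\cdots+N_{n}\), and compare summary statistics. In the sketch below, the Exponential(1) claim distribution, the parameters, and the seed are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1, n, sims = 1.0, 0.3, 20, 10000

def claim_counts():
    """One path N_1, ..., N_n of the Poisson ARCH(1) process (N_0 = 0)."""
    prev, out = 0, []
    for _ in range(n):
        prev = rng.poisson(a0 + a1 * prev)
        out.append(prev)
    return out

s1 = np.empty(sims)   # S_n: claims drawn period by period
s2 = np.empty(sims)   # compound form: one block of N_1 + ... + N_n i.i.d. claims
for r in range(sims):
    s1[r] = sum(rng.exponential(1.0, c).sum() for c in claim_counts())
    s2[r] = rng.exponential(1.0, sum(claim_counts())).sum()
print(s1.mean(), s2.mean())
```

The two empirical means (and, with more simulations, the full empirical distributions) agree, as Lemma 2.1 predicts.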

Next, the following lemma establishes the MDP for \(\{N_{t}, t\geq0\}\).

Lemma 2.2

Let \(\{N_{t}, t\geq1\}\) be defined by (1.2) with \(N_{0}\ge0\) a deterministic integer, and let \((b_{n})\) be a sequence of positive numbers satisfying \(b_{n}\rightarrow\infty\) and \(b_{n}/n\rightarrow0\). Then we have
$$ \limsup_{n\rightarrow\infty}\frac{1}{b_{n}}\log \mathbb{P} \Biggl\{ \frac{1}{\sqrt{nb_{n}}}\sum_{t=1}^{n} \biggl(N_{t}-{a_{0}\over 1-a_{1}} \biggr)\in H \Biggr\} \leqslant- \inf_{x\in H}I_{M}(x) $$
(2.1)
for each closed set \(H\subset \mathbb{R}\); and
$$ \liminf_{n\rightarrow\infty}\frac{1}{b_{n}}\log \mathbb{P} \Biggl\{ \frac{1}{\sqrt{nb_{n}}}\sum_{t=1}^{n} \biggl(N_{t}-{a_{0}\over 1-a_{1}} \biggr)\in G \Biggr\} \geqslant- \inf_{x\in G}I_{M}(x) $$
(2.2)
for each open set \(G\subset\mathbb{R}\), where the rate function \(I_{M}(\cdot)\) is given by
$$I_{M}(x)={x^{2}\over 2\sigma^{2}} ,\quad x\in\mathbb{R}, \textit{where } \sigma^{2} =\mathbb{E}\bigl(\operatorname{Var}(N_{t}| \mathscr{F}_{t-1})\bigr)=\frac{a_{0}}{1-a_{1}}. $$

Proof

By the Gärtner-Ellis theorem (Theorem 2.3.6, p.44, Dembo and Zeitouni [13]), all we need to show is that
$$ \lim_{n\to\infty}{1\over b_{n}}\log\mathbb{E} \exp \Biggl\{ \beta \sqrt{b_{n}\over n} \sum_{t=1}^{n}(N_{t}- \mathbb{E}N_{1}) \Biggr\} ={1\over 2}\sigma^{2} \beta^{2},\quad \beta\in\mathbb{R}. $$
(2.3)
Let \(\beta\in\mathbb{R}\) be fixed but arbitrary and write
$$l_{n}=a_{1}\bigl(e^{\theta_{n}}-1\bigr), $$
where \(\theta_{n}=\beta\sqrt{\frac{b_{n}}{n}}\). Observe that for any \(t\ge1\),
$$\begin{aligned}& \mathbb{E} \bigl[\exp \bigl\{ \theta_{n} (N_{t}-\mathbb {E}N_{1})-l_{n}N_{t-1} \bigr\} \vert{ \mathscr{F}}_{t-1} \bigr] \\& \quad =\exp \{-l_{n}N_{t-1}-\theta_{n} \mathbb{E}N_{1} \}\mathbb{E} \bigl[\exp\{\theta_{n} N_{t} \} \vert{\mathscr{F}}_{t-1} \bigr] \\& \quad =\exp \{-l_{n}N_{t-1}-\theta_{n}\mathbb{E} N_{1} \}\exp \bigl\{ (a_{0}+a_{1}N_{t-1}) \bigl(e^{\theta_{n}}-1\bigr) \bigr\} \\& \quad =\exp \biggl\{ \frac{a_{0}}{1-a_{1}}\bigl[(1-a_{1}) \bigl(e^{\theta_{n}}-1\bigr)-\theta_{n}\bigr] \biggr\} . \end{aligned}$$
Hence,
$$\begin{aligned}& \mathbb{E}\exp \Biggl\{ \sum_{t=1}^{n+1} \bigl\{ \theta_{n} (N_{t}-\mathbb{E} N_{1})-l_{n}N_{t-1} \bigr\} \Biggr\} \\& \quad = \biggl(\exp \biggl\{ \frac{a_{0}}{1-a_{1}}\bigl[(1-a_{1}) \bigl(e^{\theta_{n}}-1\bigr)-\theta _{n}\bigr] \biggr\} \biggr)^{n+1}. \end{aligned}$$
(2.4)
On the other hand,
$$\begin{aligned}& \mathbb{E}\exp \Biggl\{ \sum_{t=1}^{n+1} \bigl\{ \theta_{n} (N_{t}-\mathbb{E} N_{1})-l_{n}N_{t-1} \bigr\} \Biggr\} \\& \quad =\exp \bigl\{ -(n+1)l_{n}\mathbb{E}N_{1} \bigr\} \mathbb{E} \Biggl\{ \exp \bigl\{ \theta_{n}N_{n+1}-l_{n}N_{0} \bigr\} \exp \Biggl\{ (\theta_{n}-l_{n} )\sum _{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} \Biggr\} \\& \quad =\exp \bigl\{ -(n+1) l_{n}\mathbb{E}N_{1} \bigr\} \mathbb{E}\exp \Biggl\{ \theta_{n}N_{n+1}+ ( \theta_{n}-l_{n} )\sum_{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} . \end{aligned}$$
(2.5)
Combining (2.4) and (2.5) and by the definition of \(l_{n}\),
$$\begin{aligned}& \biggl(\exp \biggl\{ \frac{a_{0}}{1-a_{1}}\bigl[(1-a_{1}) \bigl(e^{\theta_{n}}-1\bigr)-\theta _{n}\bigr] \biggr\} \biggr)^{n+1} \bigl(\exp \bigl\{ a_{1}\bigl(e^{\theta_{n}}-1 \bigr)\mathbb{E} N_{1} \bigr\} \bigr)^{n+1} \\& \quad = \biggl(\exp \biggl\{ \frac{a_{0}}{1-a_{1}}\bigl[e^{\theta_{n}}-1- \theta_{n}\bigr] \biggr\} \biggr)^{n+1} \\& \quad = \mathbb{E}\exp \Biggl\{ \theta_{n}N_{n+1}+ ( \theta_{n}-l_{n} )\sum_{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} . \end{aligned}$$
By the Taylor expansion \(e^{\theta_{n}}=1+\theta_{n}+\frac{1}{2}\theta_{n}^{2}+o(\theta_{n}^{2})\), the right-hand side is asymptotically equivalent to
$$\exp \biggl\{ {1\over 2}\sigma^{2} \beta^{2} b_{n} \biggr\} . $$
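In more detail, with \(\theta_{n}=\beta\sqrt{b_{n}/n}\) and \(\sigma^{2}=a_{0}/(1-a_{1})\), the exponent on the left-hand side expands as
$$\begin{aligned} (n+1)\frac{a_{0}}{1-a_{1}}\bigl[e^{\theta_{n}}-1-\theta_{n}\bigr] &=(n+1)\frac{a_{0}}{1-a_{1}} \biggl[\frac{\theta_{n}^{2}}{2}+o\bigl(\theta_{n}^{2}\bigr) \biggr] \\ &=\frac{1}{2}\sigma^{2}\beta^{2}b_{n}\cdot\frac{n+1}{n}\bigl(1+o(1)\bigr), \end{aligned}$$
and \(b_{n}(n+1)/n\sim b_{n}\), which justifies the stated asymptotic equivalence after dividing by \(b_{n}\).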
Thus
$$\lim_{n\to\infty}{1\over b_{n}}\log\mathbb{E}\exp \Biggl\{ \theta_{n}N_{n+1}+ (\theta_{n}-l_{n} )\sum _{t=1}^{n} (N_{t}- \mathbb{E}N_{1}) \Biggr\} ={1\over 2}\sigma^{2} \beta^{2}. $$
By the fact that \(\sup_{t\geq1}\mathbb{E}\exp\{\theta N_{t}\} <\infty\) for every \(\theta>0\) (see Li [2]) and \(\theta_{n}\to0\), a standard exponential approximation argument based on the Hölder inequality enables us to remove the term \(\theta_{n}N_{n+1}\) from the above equation. So we have
$$ \lim_{n\to\infty}{1\over b_{n}}\log\mathbb{E} \exp \Biggl\{ (\theta_{n}-l_{n} )\sum _{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} ={1\over 2}\sigma^{2} \beta^{2}. $$
(2.6)
By the Hölder inequality, therefore,
$$\begin{aligned}& \mathbb{E}\exp \Biggl\{ (\theta_{n}-l_{n} )\sum _{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \\& \quad \le \Biggl(\mathbb{E}\exp \Biggl\{ \theta_{n}\sum _{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \Biggr)^{\theta_{n}-l_{n}\over \theta_{n}}\le\mathbb{E}\exp \Biggl\{ \theta_{n}\sum_{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} , \end{aligned}$$
where the second step follows from the fact that
$$\mathbb{E}\exp \Biggl\{ \theta_{n}\sum_{t=1}^{n} (N_{t}-\mathbb {E}N_{1}) \Biggr\} \ge1, $$
which can be proved by Jensen’s inequality.
By the fact that \(\theta_{n}=\beta\sqrt{b_{n}\over n}\) and by (2.6), we obtain the lower bound
$$ \liminf_{n\to\infty}{1\over b_{n}}\log \mathbb{E}\exp \Biggl\{ \beta\sqrt{b_{n}\over n}\sum _{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \ge{1\over 2}\sigma^{2} \beta^{2}. $$
(2.7)
On the other hand, given a small number \(0<\delta<1\), we have \(\theta_{n}-l_{n} >(1-\delta)\theta_{n} =(1-\delta)\beta\sqrt{b_{n}\over n} \) for all sufficiently large n. By the Hölder inequality,
$$\begin{aligned}& \mathbb{E}\exp \Biggl\{ (1-\delta)\beta\sqrt{b_{n}\over n}\sum _{t=1}^{n} (N_{t}-\mathbb{E}N_{1}) \Biggr\} \\& \quad \le \Biggl(\mathbb{E}\exp \Biggl\{ (\theta_{n}-l_{n} ) \sum_{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \Biggr)^{(1-\delta)\theta_{n}\over \theta_{n}-l_{n}} \le \mathbb{E}\exp \Biggl\{ ( \theta_{n}-l_{n} )\sum_{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} . \end{aligned}$$
By (2.6), therefore,
$$\limsup_{n\to\infty}{1\over b_{n}}\log \mathbb{E}\exp \Biggl\{ (1-\delta)\beta\sqrt{b_{n}\over n}\sum_{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \le{1\over 2} \sigma^{2} \beta^{2}. $$
Since \(\beta\in\mathbb{R}\) can be arbitrary, replacing it by \((1-\delta)^{-1}\beta\) in the above leads to
$$\limsup_{n\to\infty}{1\over b_{n}}\log \mathbb{E}\exp \Biggl\{ \beta\sqrt{b_{n}\over n}\sum_{t=1}^{n} (N_{t}-\mathbb{E} N_{1}) \Biggr\} \le{1\over 2} \sigma^{2} \biggl({\beta\over 1-\delta} \biggr)^{2}. $$
Letting \(\delta\to0^{+}\) on the right-hand side yields the desired upper bound, which, together with the lower bound (2.7), leads to (2.3). □
As a consequence of Lemma 2.2, for every \(\eta>0\), taking the closed set \(H=\{x:|x|\geq\eta\}\) and \(b_{n}=\sqrt{n}\), we have
$$ \mathbb{P} \Biggl(\frac{1}{n}\Biggl\vert \sum _{t=1}^{n}(N_{t}-\mathbb {E}N_{1})\Biggr\vert \geq \eta \Biggr)\leq\mathbb{P} \Biggl( \frac{1}{n^{3/4}}\Biggl\vert \sum_{t=1}^{n}(N_{t}- \mathbb{E}N_{1})\Biggr\vert \geq\eta \Biggr) \le \exp \{-c_{\eta}\sqrt{n}\}, $$
(2.8)
where \(c_{\eta}=\frac{\eta^{2}}{2\sigma^{2}}>0\) is independent of n. This gives genuine exponential decay for the probability that the sample average deviates from its expectation.
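The decay in (2.8) is easy to observe numerically. The following sketch (illustrative parameters and seed, chosen by us; (2.8) itself is an asymptotic statement, so at finite n the empirical probability simply sits well below the bound) estimates the deviation probability and compares it with \(\exp\{-c_{\eta}\sqrt{n}\}\):

```python
import numpy as np

rng = np.random.default_rng(3)
a0, a1 = 1.0, 0.3
nu = a0 / (1 - a1)
sigma2 = nu                          # sigma^2 = a0/(1-a1), as in Lemma 2.2
n, eta, sims = 400, 0.3, 5000
c_eta = eta ** 2 / (2 * sigma2)

prev = np.zeros(sims)
sums = np.zeros(sims)
for t in range(n):
    prev = rng.poisson(a0 + a1 * prev)   # advance all paths one step
    sums += prev

emp = np.mean(np.abs(sums / n - nu) >= eta)   # P(|sample mean - nu| >= eta), estimated
bound = np.exp(-c_eta * np.sqrt(n))           # exp(-c_eta * sqrt(n))
print(emp, bound)
```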

The last lemma below is a restatement of Theorem 3.1 of Ng et al. [7].

Lemma 2.3

Let \(\{Y, Y_{j}, j\geqslant1\}\) be a sequence of i.i.d. non-negative random variables with common distribution function \(F_{Y}\in\mathscr{C}\) and finite expectation μ, and let \(Q_{n}=\sum_{j=1}^{n}Y_{j}\). Then, for any fixed \(\gamma>0\),
$$ P(Q_{n}-n\mu>y)\sim n \overline{F}_{Y}(y)\quad (n \rightarrow\infty) \textit{ uniformly for } y\geq\gamma n. $$
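Lemma 2.3 can likewise be illustrated by Monte Carlo. In the sketch below, the Pareto tail \(\overline{F}_{Y}(y)=y^{-1.5}\) on \([1,\infty)\) (so \(\mu=3\)), the values \(n=50\), \(\gamma=20\), and the seed are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n, gamma = 1.5, 50, 20.0
mu = alpha / (alpha - 1)            # Pareto(alpha) mean on [1, inf), = 3
y = gamma * n                       # deviation level y = gamma * n
sims = 100_000

Y = rng.random((sims, n)) ** (-1.0 / alpha)   # Pareto draws via inverse transform
Q = Y.sum(axis=1)
emp = np.mean(Q - n * mu > y)       # estimate of P(Q_n - n*mu > y)
approx = n * y ** (-alpha)          # n * F_bar_Y(y)
print(emp, approx)
```

The estimate lands close to \(n\overline{F}_{Y}(y)\), in line with the single-big-jump intuition behind the lemma.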

3 Proof of Theorem 1.1

Throughout this section, unless otherwise stated, every limit relation is understood as valid uniformly for all \(x\geq\gamma n\) as \(n\rightarrow\infty\). Trivially, equation (1.3) amounts to the conjunction of
$$ \begin{aligned} &\mathbb{P}\biggl(S_{n}-\frac{a_{0}}{1-a_{1}}n \mu >x\biggr)\lesssim\frac{na_{0}}{1-a_{1}} \overline{F}_{X}(x)\quad \text{and} \\ &\mathbb{P}\biggl(S_{n}-\frac{a_{0}}{1-a_{1}}n\mu >x\biggr)\gtrsim \frac{na_{0}}{1-a_{1}} \overline{F}_{X}(x), \end{aligned} $$
(3.1)
which will be proven separately in the following two subsections.

3.1 Proof of the first relation in (3.1)

Write \(\nu=\mathbb{E}N_{1}=\frac{a_{0}}{1-a_{1}}\). For an arbitrarily fixed small \(\eta\in(0,1)\), by Lemma 2.1 we derive
$$\begin{aligned}& \mathbb{P}(S_{n}-\nu n\mu>x) \\& \quad = \mathbb{P}\Biggl(\sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}- \nu n\mu >x\Biggr) \\& \quad = \mathbb{P}\Biggl\{ \sum_{t=1}^{n}N_{t}< n(\nu+\eta), \sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}- \nu n\mu>x\Biggr\} \\& \qquad {} +\mathbb{P}\Biggl\{ \sum_{t=1}^{n}N_{t} \geq n(\nu+\eta), \sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}- \nu n\mu>x\Biggr\} \\& \quad \leq \mathbb{P}\Biggl(\sum_{j=1}^{[n(\nu+\eta)]}Y_{j}- \nu n\mu>x\Biggr)+\mathbb{P}\Biggl\{ \sum_{t=1}^{n}N_{t} \geq n(\nu+\eta), \sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}- \nu n\mu>x\Biggr\} \\& \quad = \Delta_{1}+\Delta_{2}, \end{aligned}$$
(3.2)
where \([\cdot]\) denotes the integer part throughout this paper.
From Lemma 2.3 we see that, for η small enough that \(\gamma-\eta\mu>0\),
$$\begin{aligned} \begin{aligned}[b] \Delta_{1}&= \mathbb{P}\Biggl(\sum _{j=1}^{[n(\nu+\eta)]}Y_{j}-\bigl[n(\nu+\eta)\bigr]\mu >x+n\nu\mu-\bigl[n(\nu+\eta)\bigr]\mu\Biggr) \\ &\sim \bigl[n(\nu+\eta)\bigr]\overline{F}_{Y}\bigl(x+n\nu\mu- \bigl[n(\nu+\eta)\bigr]\mu \bigr) \\ &\leq n(\nu+\eta)\overline{F}_{X}\bigl(x(1-\eta\mu/\gamma)\bigr). \end{aligned} \end{aligned}$$
(3.3)
As for the second term,
$$\begin{aligned} \Delta_{2} =&\mathbb{P} \Biggl\{ \sum_{t=1}^{n}N_{t} \geqslant n(\nu+\eta), \sum_{j=1}^{N_{1}+\cdots+N_{n}}Y_{j}-n \nu\mu>x \Biggr\} \\ =&\sum_{m\geq n(\nu+\eta)}\mathbb{P} \Biggl\{ \sum _{t=1}^{n} N_{t}=m \Biggr\} \mathbb{P} \Biggl\{ \sum_{j=1}^{m}Y_{j}-n\nu \mu>x \Biggr\} \\ \leq&\sum_{m\geq n(\nu+\eta)}\mathbb{P} \Biggl\{ \sum _{t=1}^{n} N_{t}=m \Biggr\} \mathbb{P} \Biggl\{ \sum_{j=1}^{m}Y_{j}>x \Biggr\} \\ \leq&\sum_{m\geq n(\nu+\eta)}\mathbb{P} \Biggl\{ \sum _{t=1}^{n} N_{t}=m \Biggr\} \sum _{j=1}^{m}\mathbb{P} \biggl\{ Y_{j}> \frac{x}{m} \biggr\} \\ \leq&\sum_{m\geq n(\nu+\eta)}\mathbb{P} \Biggl\{ \sum_{t=1}^{n} N_{t}=m \Biggr\} m\overline{F}_{Y} \biggl({x\over m} \biggr). \end{aligned}$$
By the assumption \(F_{Y}=F_{X}\in\mathscr{C}\), for every \(p>J^{+}_{F}\) there is a constant \(C>0\), independent of m, such that \(\overline{F}_{X} ({x\over m} )\leq Cm^{p}\overline{F}_{X}(x)\) for all \(m\geq1\) and all sufficiently large x (see Lemma 2.2 of Ng et al. [7]), where \(J^{+}_{F}\) denotes the upper Matuszewska index of \(F_{X}\). Hence
$$\begin{aligned} \Delta_{2} \leq& C\overline{F}_{X}(x)\sum _{m\geq n(\nu+\eta)}m^{p+1}\mathbb{P} \Biggl\{ \sum _{t=1}^{n} N_{t}=m \Biggr\} \\ \leq& C\overline{F}_{X}(x)\mathbb{E} \Biggl\{ \Biggl(\sum _{t=1}^{n} N_{t}\Biggr)^{p+1} \mathbb {1}_{\{\sum_{t=1}^{n} N_{t}\geq n(\nu+\eta)\}} \Biggr\} . \end{aligned}$$
(3.4)
We now claim that
$$ \lim_{n\rightarrow\infty}\mathbb{E} \Biggl\{ \Biggl(\sum _{t=1}^{n} N_{t} \Biggr)^{p+1} \mathbb {1}_{\{\sum_{t=1}^{n} N_{t}\geq n(\nu+\eta)\}} \Biggr\} =0. $$
(3.5)
Indeed, by the Cauchy-Schwarz inequality
$$\begin{aligned}& \mathbb{E} \Biggl\{ \Biggl(\sum_{t=1}^{n} N_{t}\Biggr)^{p+1} \mathbb {1}_{\{\sum_{t=1}^{n} N_{t}\geq n(\nu+\eta)\}} \Biggr\} \\& \quad \leq \Biggl\{ \mathbb{E}\Biggl(\sum_{t=1}^{n} N_{t}\Biggr)^{2(p+1)}\Biggr\} ^{1/2} \Biggl\{ \mathbb{P} {\Biggl(\sum_{t=1}^{n} N_{t}\geq n(\nu+\eta)\Biggr)}\Biggr\} ^{1/2}. \end{aligned}$$
Using (2.8), we have
$$ \mathbb{P} {\Biggl(\sum_{t=1}^{n} N_{t}\geq n(\nu+\eta)\Biggr)}\leq\mathbb{P} \Biggl(\frac{1}{n} \Biggl\vert \sum_{t=1}^{n}(N_{t}- \nu)\Biggr\vert \geq\eta \Biggr) \le\exp\{ -c_{\eta}\sqrt{n}\}, $$
where \(c_{\eta}=\frac{\eta^{2}}{2\sigma^{2}}>0\) is independent of n. This, together with the fact that \(\mathbb{E}(\sum_{t=1}^{n} N_{t})^{2(p+1)}=O(n^{2(p+1)}) \), proves (3.5).
Hence, by (3.4) and (3.5), we have
$$ \Delta_{2}= o\bigl(\overline{F}_{X}(x) \bigr)=o\bigl(n\nu\overline{F}_{X}(x)\bigr). $$
(3.6)
Substituting (3.3) and (3.6) into (3.2) yields
$$ \mathbb{P}(S_{n}-n\nu\mu>x)\lesssim n(\nu+\eta)\overline{F}_{X} \bigl(x(1-\eta\mu/\gamma)\bigr)+o\bigl(n\nu\overline{F}_{X}(x)\bigr). $$
By the arbitrariness of η and the condition \(F_{X}\in\mathscr{C}\), we obtain the first relation (3.1).

3.2 Proof of the second relation in (3.1)

Let \(0<\eta<1\) be arbitrarily fixed and small. Consider the decomposition
$$\begin{aligned} \mathbb{P}(S_{n}-n\nu\mu>x) =&\mathbb{P}\Biggl(\sum _{j=1}^{N_{1}+N_{2}+\cdots+N_{n}}Y_{j}-n\nu\mu>x\Biggr) \\ \geq&\mathbb{P}\Biggl(\sum_{t=1}^{n}N_{t}> n(\nu-\eta), \sum_{j=1}^{N_{1}+N_{2}+\cdots+N_{n}}Y_{j}-n \nu\mu>x\Biggr) \\ \geq&\mathbb{P}\Biggl(\sum_{t=1}^{n}N_{t}> n(\nu-\eta)\Biggr)\cdot \mathbb{P}\Biggl( \sum_{j=1}^{[n(\nu-\eta)]}Y_{j}-n \nu\mu>x\Biggr). \end{aligned}$$
(3.7)
Applying (2.8) we have
$$ \mathbb{P}\Biggl\{ \sum_{t=1}^{n}N_{t}> n(\nu-\eta)\Biggr\} \geq \mathbb{P}\Biggl\{ \frac{1}{n}\Biggl\vert \sum _{t=1}^{n}(N_{t}-\nu)\Biggr\vert < \eta\Biggr\} \rightarrow1, \quad \mbox{as } n\rightarrow\infty. $$
(3.8)
By Lemma 2.3, we have
$$\begin{aligned} \mathbb{P}\Biggl( \sum_{j=1}^{[n(\nu-\eta)]}Y_{j}-n \nu\mu>x\Biggr) \sim& \bigl[n(\nu-\eta)\bigr]\overline{F}_{Y}\bigl(x+n \nu\mu-\bigl[n(\nu-\eta)\bigr]\mu \bigr) \\ \gtrsim& n(\nu-\eta)\overline {F}_{X}\bigl(x(1+\eta \mu/\gamma)\bigr). \end{aligned}$$
(3.9)
Finally, substituting (3.8) and (3.9) into (3.7) yields
$$\mathbb{P} (S_{n}-n\nu\mu>x )\gtrsim n(\nu-\eta)\overline{F}_{X} \bigl(x(1+\eta\mu/\gamma)\bigr). $$
By the arbitrariness of η and the condition \(F_{X}\in\mathscr{C}\), we obtain the second relation (3.1).

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11571138).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Science, Qiqihar University, Qiqihar, China

References

  1. Cossette, H, Marceau, E, Maume-Deschamps, V: Discrete-time risk models based on time series for count random variables. ASTIN Bull. 1, 123-150 (2010)
  2. Li, JH: On discrete-time risk models with dependence based on integer-valued time series processes. Master thesis, University of Hong Kong, Hong Kong (2012)
  3. Rydberg, TH, Shephard, N: BIN Models for Trade-by-Trade Data. Modelling the Number of Trades in a Fixed Interval of Time. Technical report 0740, Econometric Society. http://ideas.repec.org/p/ecm/wc2000/0740.html (2000). Accessed 25 Jul 2006
  4. Streett, S: Some observation driven models for time series of counts. PhD thesis, Department of Statistics, Colorado State University (2000)
  5. Ferland, R, Latour, A, Oraichi, D: Integer-valued GARCH process. J. Time Ser. Anal. 27, 923-942 (2006)
  6. Klüppelberg, C, Mikosch, T: Large deviations of heavy-tailed random sums with applications in insurance and finance. J. Appl. Probab. 34, 293-308 (1997)
  7. Ng, KW, Tang, Q, Yan, J, Yang, H: Precise large deviations for sums of random variables with consistently varying tails. J. Appl. Probab. 41, 93-107 (2004)
  8. Leipus, R, Šiaulys, J: Asymptotic behaviour of the finite-time ruin probability under subexponential claim sizes. Insur. Math. Econ. 40, 498-508 (2007)
  9. Asmussen, S: Ruin Probabilities. Advanced Series on Statistical Science and Applied Probability, vol. 2. World Scientific, River Edge (2000)
  10. Liu, L: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79, 1290-1298 (2009)
  11. Yang, Y, Leipus, R, Šiaulys, J: Uniform estimates for the finite-time ruin probability in the dependent renewal risk model. J. Math. Anal. Appl. 1, 215-225 (2011)
  12. Chen, YQ, Yuen, KC: Precise large deviations of aggregate claims in a size-dependent renewal risk model. Insur. Math. Econ. 51, 457-461 (2012)
  13. Dembo, A, Zeitouni, O: Large Deviations Techniques and Applications, 2nd edn. Springer, New York (1998)

Copyright

© Yu 2016
