Precise large deviations for the aggregate claims in a dependent compound renewal risk model


Abstract

We consider a dependent compound renewal risk model in which the interarrival times of accidents and the claim numbers follow a dependence structure characterized by a conditional tail probability, and the claim sizes are either pairwise negatively quadrant dependent or follow a dependence structure related to upper tail asymptotic independence. When the distributions of the claim sizes belong to the dominated variation distribution class, we derive asymptotic lower and upper bounds for the precise large deviations of the aggregate claims.

Introduction

Consider the compound renewal risk model where the claim sizes \(\{X_{ij},j\geq 1\}\) caused by the ith (\(i\geq 1\)) accident form a sequence of nonnegative random variables (r.v.s) with finite mean. The interarrival times of the accidents \(\{\theta _{i},i\geq 1\}\) are positive independent identically distributed (i.i.d.) r.v.s with finite mean \(\lambda ^{-1}\). Then the renewal counting process is

$$ N(t)=\sup \Biggl\{ n\geq 1: \tau _{n}=\sum _{i=1}^{n}\theta _{i}\leq t \Biggr\} ,\quad t\geq 0, $$

with a mean function \(\lambda (t)=EN(t)\), \(t\geq 0\), such that \(\lambda (t)/(\lambda t)\to 1\) as \(t\to \infty \). Let \(\{Y_{i},i\geq 1 \}\) be the claim numbers caused by the successive accidents, which form a sequence of i.i.d. positive integer-valued r.v.s with finite mean ν. We assume that the r.v.s \(\{Y_{i},i\geq 1\}\) are bounded, that is, there exists a finite integer \(h>0\) such that \(Y_{i}\leq h\), \(i\geq 1\). Thus the aggregate claims accumulated up to time \(t\geq 0\) are expressed as

$$\begin{aligned} S(t)=\sum_{i=1}^{N(t)}\sum _{j=1}^{Y_{i}}X_{ij}. \end{aligned}$$
(1.1)

In this paper, we investigate the precise large deviations for the random sums \(S(t)\).
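To make the model concrete, here is a minimal Monte Carlo sketch of \(S(t)\) in (1.1) under illustrative assumptions of our own, not taken from the paper: exponential interarrival times with rate λ, claim numbers uniform on \(\{1,\dots ,h\}\) (hence bounded by h), and Pareto claim sizes.

```python
import random

def simulate_S(t, lam=1.0, alpha=2.5, h=3, rng=None):
    """One sample of the aggregate claims S(t) in (1.1).

    Illustrative assumptions (ours): interarrival times theta_i are
    exponential(lam), claim numbers Y_i are uniform on {1, ..., h},
    and claim sizes are Pareto with tail P(X > x) = x**(-alpha), x >= 1.
    """
    rng = rng or random.Random(0)
    total, clock = 0.0, 0.0
    while True:
        clock += rng.expovariate(lam)        # interarrival time theta_i
        if clock > t:                        # next accident is after t
            return total
        y = rng.randint(1, h)                # bounded claim number Y_i <= h
        # inverse-transform sampling: U**(-1/alpha) has the Pareto tail above
        total += sum(rng.random() ** (-1.0 / alpha) for _ in range(y))

s = simulate_S(100.0)
```

With \(\lambda =1\), \(\nu =(1+h)/2=2\), and \(EX=\alpha /(\alpha -1)\), the long-run sample mean of \(S(t)\) should be close to \(\lambda t\nu EX\).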

Throughout this paper, all limit relationships hold as \(t\to \infty \) unless stated otherwise. For two positive functions \(a(x)\) and \(b(x)\), we write \(a(x)\lesssim b(x)\) if \(\limsup_{x\to \infty } a(x)/b(x) \leq 1\), \(a(x)\gtrsim b(x)\) if \(\liminf_{x\to \infty } a(x)/b(x) \geq 1\), \(a(x)\sim b(x)\) if both \(a(x)\lesssim b(x)\) and \(a(x)\gtrsim b(x)\), and \(a(x)=o(b(x))\) if \(\lim_{x\to \infty } a(x)/b(x)=0\). For a real number y, its integer part is denoted by \(\lfloor y\rfloor \), and \(\mathbf{1}_{B}\) denotes the indicator function of an event B.

Heavy-tailed distribution classes

In this subsection, we introduce some related heavy-tailed distribution classes. For a proper distribution V on \((-\infty , \infty )\), let \(\overline{V}=1-V\) be its (right) tail. A distribution V on \((-\infty , \infty )\) is said to be heavy-tailed if

$$\begin{aligned} \int _{-\infty }^{\infty }e^{\lambda x}V(dx)=\infty \quad \text{for all } \lambda >0, \end{aligned}$$

that is, if V has no finite positive exponential moment. Otherwise, V is said to be light-tailed.

The dominated variation distribution class is an important class of heavy-tailed distributions, denoted by \(\mathscr{D}\). A distribution V on \((-\infty , \infty )\) belongs to the class \(\mathscr{D}\) if for any \(y\in (0,1)\),

$$ \limsup_{x\to \infty }\frac{\overline{V}(xy)}{\overline{V}(x)}< \infty . $$

A slightly smaller class is the consistent variation distribution class, denoted by \(\mathscr{C}\). A distribution V on \((-\infty , \infty )\) belongs to the class \(\mathscr{C}\) if

$$ \lim_{y\nearrow 1}\limsup_{x\to \infty } \frac{{\overline{V}}(xy)}{ {\overline{V}}(x)}=1\quad \text{or, equivalently,}\quad \lim _{y\searrow 1}\liminf_{x\to \infty }\frac{{\overline{V}}(xy)}{{\overline{V}}(x)}=1. $$

Another related distribution class is the long-tailed distribution class, denoted by \(\mathscr{L}\). A distribution V on \((-\infty , \infty )\) belongs to the class \(\mathscr{L}\) if for any \(y>0\),

$$ \lim_{x\to \infty }\frac{\overline{V}(x+y)}{\overline{V}(x)}=1. $$

A special distribution class is the extended regularly varying distribution class, denoted by ERV. A distribution V on \((-\infty , \infty )\) belongs to the class \(\mathit{ERV}(- \alpha ,-\beta )\) for \(0\leq \alpha \leq \beta <\infty \) if for any \(y>1\),

$$ y^{-\beta }\leq \liminf_{x\to \infty }\frac{\overline{V}(xy)}{ \overline{V}(x)} \leq \limsup_{x\to \infty }\frac{\overline{V}(xy)}{ \overline{V}(x)}\leq y^{-\alpha }. $$

It is well known that the following relationships hold:

$$ \mathit{ERV}\subset \mathscr{C}\subset \mathscr{L}\cap \mathscr{D} \subset \mathscr{D}. $$

See, for example, Embrechts et al. [9], Foss et al. [10], and Denisov et al. [7]. For a distribution V on \((-\infty , \infty )\), denote its upper Matuszewska index by

$$\begin{aligned} J^{+}_{V}=-\lim_{y\to \infty } \frac{\log {\overline{V}}_{*}(y)}{ \log y},\quad \text{with } {\overline{V}}_{*}(y):= \liminf_{x\to \infty }\frac{{\overline{V}}(xy)}{{\overline{V}}(x)}, \quad \text{for } y>1. \end{aligned}$$

Another important parameter \(L_{V}\) is defined by

$$ L_{V}=\lim_{y\searrow 1}{\overline{V}}_{*}(y). $$

From Chap. 2.1 of Bingham et al. [2] we get the following equivalent statements:

$$ (\mathrm{i})\quad V\in \mathscr{D};\qquad (\mathrm{ii})\quad 0< L_{V}\leq 1; \qquad (\mathrm{iii})\quad J^{+}_{V}< \infty . $$
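As a numerical illustration (our own, assuming a Pareto tail \(\overline{V}(x)=x^{-\alpha }\) for \(x\geq 1\)), the quantities \({\overline{V}}_{*}(y)\), \(J^{+}_{V}\), and \(L_{V}\) can be approximated directly from their definitions; for this tail, \({\overline{V}}_{*}(y)=y^{-\alpha }\), so \(J^{+}_{V}=\alpha <\infty \) and \(L_{V}=1\), consistent with \(V\in \mathscr{D}\).

```python
import math

# Pareto tail Vbar(x) = x**(-alpha) for x >= 1 (illustrative choice, ours)
alpha = 2.5

def Vbar(x):
    return min(1.0, x ** (-alpha))

def Vbar_star(y, x=1e6):
    # finite-x proxy for liminf_{x -> inf} Vbar(x*y) / Vbar(x)
    return Vbar(x * y) / Vbar(x)

# upper Matuszewska index: J_V^+ = -lim_y log(Vbar_*(y)) / log(y)
J_plus = -math.log(Vbar_star(10.0)) / math.log(10.0)   # should be ~ alpha
# L_V = lim_{y -> 1+} Vbar_*(y); evaluate just above 1
L_V = Vbar_star(1.0001)                                # should be ~ 1
```

Both values match the statements (i)-(iii): \(0<L_{V}\leq 1\) and \(J^{+}_{V}<\infty \).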

Dependence structures

In studies of dependent risk models, many works consider a general dependence structure, namely upper tail asymptotic independence (UTAI).

Definition 1.1

We say that r.v.s \(\{\xi _{i},i\geq 1\}\) are upper tail asymptotically independent (UTAI) if for all \(i\ne j\geq 1\),

$$\begin{aligned} \lim_{\min \{x_{i},x_{j}\}\to \infty }P(\xi _{i}>x_{i}|\xi _{j}>x _{j})=0. \end{aligned}$$

The UTAI structure was proposed by Geluk and Tang [12] when they investigated the tail asymptotics of sums with dependent increments. Several papers investigate the UTAI structure, such as Asimit et al. [1], Liu et al. [22], Gao and Liu [11], Li [20], and so on. In this paper, we consider a dependence structure related to the UTAI structure, which was introduced by He et al. [14] as follows:

$$\begin{aligned} \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i< j\leq n}xP(\xi _{i}>x|\xi _{j}>x)=0 \quad \text{for all } \alpha >0. \end{aligned}$$
(1.2)

Another dependence structure is that of widely (upper/lower) orthant dependent (WUOD/WLOD) r.v.s, which was introduced by Wang et al. [29] when they investigated a dependent risk model.

Definition 1.2

For the r.v.s \(\{\xi _{i}, i\geq 1\}\), if there exists a sequence of positive numbers \(\{g_{U}(m), m\geq 1\}\) satisfying

$$\begin{aligned} P \Bigl(\mathop{\bigcap }_{i=1}^{m}\{\xi _{i}>x_{i}\} \Bigr)\leq g_{U}(m) \prod _{i=1}^{m}P(\xi _{i}>x_{i}) \end{aligned}$$

for each integer \(m\geq 1\) and all \(x_{i}\in (-\infty ,\infty )\), \(1\leq i\leq m\), then we say that the r.v.s \(\{\xi _{i}, i\geq 1\}\) are widely upper orthant dependent (WUOD) with dominating coefficients \(g_{U}(m)\), \(m\geq 1\); if there exists a sequence of positive numbers \(\{g_{L}(m), m\geq 1\}\) satisfying

$$\begin{aligned} P \Bigl(\mathop{\bigcap }_{i=1}^{m}\{\xi _{i}\leq x_{i}\} \Bigr)\leq g _{L}(m)\prod _{i=1}^{m}P(\xi _{i}\leq x_{i}) \end{aligned}$$

for each integer \(m\geq 1\) and all \(x_{i}\in (-\infty ,\infty )\), \(1\leq i\leq m\), then we say that the r.v.s \(\{\xi _{i}, i\geq 1\}\) are widely lower orthant dependent (WLOD) with dominating coefficients \(g_{L}(m)\), \(m\geq 1\); and if they are both WUOD and WLOD, then we say that the r.v.s \(\{\xi _{i}, i\geq 1\}\) are widely orthant dependent (WOD).

Obviously, if the r.v.s \(\{X_{i},i\geq 1\}\) are WUOD or satisfy relation (1.2), then they follow the UTAI structure. In addition, Liu et al. [22] noted that there exist random variables that are UTAI but do not satisfy the WUOD structure; see Example 3.1 of Liu et al. [22]. He et al. [14] pointed out that if \(\{X_{i},i\geq 1 \}\) are identically distributed WUOD random variables with finite means, then \(\{X_{i},i\geq 1\}\) satisfy (1.2).
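A quick sanity check of condition (1.2) in the simplest case (our own illustration, not from the paper): for independent r.v.s with a common Pareto tail \(\overline{F}(x)=x^{-\alpha }\), the conditional probability reduces to \(\overline{F}(x)\), so \(xP(\xi _{i}>x|\xi _{j}>x)=x^{1-\alpha }\), which vanishes exactly when the mean is finite (\(\alpha >1\)).

```python
# For *independent* Pareto-tailed r.v.s (illustrative assumption, ours),
# P(xi_i > x | xi_j > x) = P(xi_i > x) = x**(-alpha), so the weighted
# conditional tail in (1.2) is x**(1 - alpha).
def weighted_cond_tail(x, alpha):
    return x * x ** (-alpha)

# finite mean (alpha = 2.5 > 1): the quantity decreases to 0
vals_finite_mean = [weighted_cond_tail(10.0 ** k, 2.5) for k in range(1, 6)]
# infinite mean (alpha = 0.5 < 1): the quantity blows up instead
vals_infinite_mean = [weighted_cond_tail(10.0 ** k, 0.5) for k in range(1, 6)]
```

This is why finite means matter for (1.2) even in the independent case.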

In this paper, when investigating the asymptotic upper bound of the precise large deviations of the aggregate claims, we also consider claim sizes that have a pairwise negatively quadrant dependence structure. This structure is stronger than the upper tail asymptotic independence structure.

Definition 1.3

We say that r.v.s \(\{\xi _{i},i\geq 1\}\) are pairwise negatively quadrant dependent (pairwise NQD) if for all real numbers x and y,

$$\begin{aligned} P(\xi _{i}>x, \xi _{j}>y)\leq P(\xi _{i}>x)P(\xi _{j}>y)\quad \text{for all } i\ne j\geq 1. \end{aligned}$$

The negatively quadrant dependence structure was introduced by Lehmann [19]. Many researchers have studied the negatively quadrant dependence structure, such as Ebrahimi and Ghosh [8], Block et al. [3], Chen and Ng [4], Yang et al. [32], and so on.
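A minimal concrete instance of Definition 1.3 (our own example): the antithetic pair \((U, 1-U)\) with U uniform on \((0,1)\) is NQD, since \(P(U>x, 1-U>y)=\max (0, 1-x-y)\leq 1-x-y+xy=(1-x)(1-y)=P(U>x)P(1-U>y)\) for \(x, y\in [0,1]\). The inequality can be checked on a grid:

```python
# Antithetic pair (U, 1 - U), U uniform on (0, 1): a classical NQD example.
def joint_tail(x, y):
    # P(U > x, 1 - U > y) = P(x < U < 1 - y)
    return max(0.0, 1.0 - x - y)

def product_tail(x, y):
    # P(U > x) * P(1 - U > y), with each tail clipped to [0, 1]
    return max(0.0, 1.0 - x) * max(0.0, 1.0 - y)

grid = [i / 20.0 for i in range(21)]
nqd_holds = all(joint_tail(x, y) <= product_tail(x, y)
                for x in grid for y in grid)
```

(For \(x<0\) or \(y<0\) the corresponding marginal tail equals 1, so the inequality is trivial there.)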

For the dependence structure between the interarrival times of accidents and the claim numbers, Liu et al. [21] adopted a general dependence structure defined via the conditional tail probability of an accident's interarrival time given the number of claims caused by the corresponding accident, which was based on Chen and Yuen [5]. In the following, we briefly restate the definition of this dependence structure between the interarrival times of accidents and the claim numbers together with some related quantities.

Assume that \(\{(\theta _{i}, Y_{i}),i\geq 1\}\) are i.i.d. We denote by \((\theta , Y)\) the generic r.v. of \(\{(\theta _{i}, Y_{i}),i\geq 1\}\) and assume that θ and Y are such that for all \(t\in [0, \infty )\) and any \(1\leq k\leq h\),

$$\begin{aligned} P(\theta >t|Y=k)\leq P\bigl(\theta ^{*}>t\bigr), \end{aligned}$$
(1.3)

where \(\theta ^{*}\) is a nonnegative r.v. independent of other sources of randomness.

Let \(\theta _{1}^{*}\) be a positive r.v. independent of all other sources of randomness and such that for all \(t\in [0,\infty )\) and any \(1\leq k\leq h\), \(P(\theta _{1}^{*}>t)=P(\theta _{1}>t|Y_{1}=k)\). Denote \(\tau _{1}^{*}=\theta _{1}^{*}\), \(\tau _{n}^{*}=\theta _{1}^{*}+\sum_{i=2} ^{n}\theta _{i}\), \(n\geq 2\), which constitute the delayed renewal counting process

$$\begin{aligned} N^{*}(t)=\sup \bigl\{ n\geq 1: \tau _{n}^{*} \leq t\bigr\} ,\quad t\geq 0. \end{aligned}$$
(1.4)

Then for all integers \(n\geq 1\) and \(1\leq k\leq h\),

$$\begin{aligned} P\bigl(N^{*}(t)=n\bigr) =& P\bigl(\tau _{n}^{*} \leq t, \tau _{n+1}^{*}>t\bigr) \\ =& \int _{0}^{\infty }P \Biggl(\sum _{i=2}^{n}\theta _{i}\leq t-u, \sum _{i=2}^{n+1}\theta _{i}> t-u \Biggr) P\bigl(\theta _{1}^{*}\in du\bigr) \\ =& \int _{0}^{\infty }P \Biggl(\sum _{i=2}^{n}\theta _{i}\leq t-u, \sum _{i=2}^{n+1}\theta _{i}> t-u \Biggr) P(\theta _{1}\in du|Y _{1}=k) \\ =& P \Biggl(\sum_{i=1}^{n}\theta _{i}\leq t, \sum_{i=1} ^{n+1}\theta _{i}> t\Big|Y_{1}=k \Biggr) \\ =& P\bigl(N(t)=n|Y_{1}=k\bigr). \end{aligned}$$
(1.5)

Motivation and main results

It is well known that for the standard renewal risk model, where the claim sizes and interarrival times are all i.i.d. r.v.s, the precise large deviations for the aggregate claims and the ruin probability have been widely studied; see Klüppelberg and Stadtmüller [15], Tang [25], Wang [28], and Hao and Tang [13], among others. However, the independence assumption does not apply to most practical problems. Because of this, many researchers have turned their attention to risk models with dependence assumptions; we refer to Chen and Ng [4], Konstantinides and Loukissas [18], Wang et al. [29], Peng and Wang [23], Peng and Wang [24], Yang et al. [33], and others.

There are many works investigating the compound renewal risk model. When the claim sizes are i.i.d. r.v.s with common distribution F, Tang et al. [26] investigated the large deviations for the aggregate claims with \(F\in \mathit{ERV}\); Konstantinides and Loukissas [17] also considered i.i.d. claim sizes and extended the results of Tang et al. [26] from \(F\in \mathit{ERV}\) to \(F\in \mathscr{C}\); Zong [34] established an asymptotic formula for the finite-time ruin probability of the compound renewal risk model in which claim sizes and interarrival times are identically distributed but negatively dependent; Chen et al. [6] considered claim sizes forming a sequence of negatively dependent heavy-tailed r.v.s with common distribution \(F\in \mathscr{C}\); Yang and Wang [31] investigated the precise large deviations for dependent random variables, assuming that the claim sizes are extended negatively dependent r.v.s with common distribution \(F\in \mathscr{D}\).

In the works mentioned above, the interarrival times of the accidents and the corresponding claim numbers were always assumed to be independent. Recently, Liu et al. [21] considered the dependent case (1.3) between the interarrival times of the accidents and the corresponding claim numbers. They investigated the case where \(\{X_{ij}, i\geq 1, j\geq 1\}\) are nonnegative r.v.s with common distribution F, \(\{\theta _{i}, i \geq 1\}\) and \(\{Y_{i}, i\geq 1\}\) are all i.i.d. r.v.s, but for every \(i\geq 1\), \(\theta _{i}\) and \(Y_{i}\) follow the dependence structure (1.3). Liu et al. [21] obtained the asymptotic lower bound of the precise large deviations of the aggregate claims when \(F\in \mathscr{L}\cap \mathscr{D}\) and \(\{X_{ij},i\geq 1,j\geq 1\}\) satisfy the dependence structure (1.2). Strengthening the conditions so that \(F\in \mathscr{C}\) and \(\{X_{ij},i\geq 1,j\geq 1\}\) are WUOD r.v.s with \(EX_{1}^{\beta }<\infty \) for some \(\beta >1\), they derived the asymptotic upper bound of the precise large deviations of the aggregate claims.

In this paper, we still consider the dependent case (1.3) between the interarrival times of the accidents and the corresponding claim numbers and investigate the precise large deviations of the aggregate claims (1.1). We mainly study the case where the claim sizes caused by the different accidents are dependent and have different distributions belonging to the dominated variation distribution class. To proceed, we impose the following assumptions.

Assumption 1.1

All the claim sizes \(\{X_{ij},j\geq 1,i\geq 1\}\) are nonnegative r.v.s satisfying relation (1.2), that is, for any \(\alpha >0\),

$$\begin{aligned} \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i\leq n}\sup_{1\leq j< l\leq h} xP(X_{ij}>x|X _{il}>x)=0 \end{aligned}$$

and

$$\begin{aligned} \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i\ne k\leq n}\sup_{1\leq j\leq h, 1\leq l\leq h} xP(X_{ij}>x|X_{kl}>x)=0. \end{aligned}$$

For each \(i\geq 1\), \(\{X_{ij},j\geq 1\}\) are identically distributed r.v.s with common distribution \(F_{i}\).

Assumption 1.2

The interarrival times of the accidents \(\{\theta _{i},i\geq 1\}\) are positive r.v.s with finite mean \(\lambda ^{-1}\); the claim numbers \(\{Y_{i},i\geq 1\}\) are a sequence of positive integer-valued r.v.s with finite mean ν; \(\{(\theta _{i}, Y_{i}), i\geq 1\}\) is a sequence of i.i.d. r.v.s with generic r.v. \((\theta , Y)\) satisfying the dependence structure (1.3). In addition, \(\{X_{ij},i\geq 1,j\geq 1\}\) are independent of \(\{\theta _{i},i\geq 1\}\) and \(\{Y_{i},i\geq 1\}\).

Firstly, we investigate the asymptotic lower bound of the precise large deviations for the aggregate claims. For this case, we will use the following assumption on the distributions \(F_{i}\), \(i\geq 1\).

Assumption 1.3

There exists a distribution F such that the distributions \(F_{i}\), \(i\geq 1\), satisfy the relation

$$\begin{aligned} 0< M_{L}:=\liminf_{x\to \infty }\inf_{i\geq 1} \frac{\overline{F _{i}}(x)}{\overline{F}(x)} \leq \limsup_{x\to \infty }\sup _{i\geq 1}\frac{\overline{F_{i}}(x)}{\overline{F}(x)}=:M_{U}< \infty . \end{aligned}$$

From Assumption 1.3 we know that if \(F_{i}\in \mathscr{D}\), \(i\geq 1\), then \(F\in \mathscr{D}\), and for any \(y>1\) and \(i\geq 1\),

$$\begin{aligned} \frac{M_{L}}{M_{U}}\overline{F_{i*}}(y)\leq \overline{F_{*}}(y) \leq \frac{M _{U}}{M_{L}}\overline{F_{i*}}(y). \end{aligned}$$

Therefore we have that \(J_{F}^{+}=J_{F_{i}}^{+}\) and \(\frac{M_{L}}{M _{U}}L_{F_{i}}\leq L_{F}\leq \frac{M_{U}}{M_{L}}L_{F_{i}}\) for all \(i\geq 1\). Under Assumptions 1.1, 1.2, and 1.3, we can obtain the asymptotic lower bound of the precise large deviations of the aggregate claims.

Theorem 1.1

Consider the aggregate claims (1.1). Suppose that Assumptions 1.1, 1.2, and 1.3 are satisfied. If \(F_{i}\in \mathscr{D}\), \(i\geq 1\), then for any \(\gamma >0\),

$$\begin{aligned} P\bigl(S(t)>x\bigr)\gtrsim \nu \sum_{i=1}^{\lambda t}L_{F_{i}} \overline{F}_{i}(x) \end{aligned}$$
(1.6)

uniformly for all \(x\geq \gamma t\), which is equivalent to

$$ \liminf_{t\to \infty }\inf_{x\geq \gamma t} \frac{P(S(t)>x)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}\overline{F}_{i}(x)}\geq 1. $$

When \(\{X_{ij},i\geq 1,j\geq 1\}\) are identically distributed r.v.s, we obtain the following corollary directly from Theorem 1.1.

Corollary 1.1

Consider the aggregate claims (1.1). Suppose that Assumption 1.2 is satisfied and the claim sizes \(\{X_{ij},i\geq 1,j\geq 1 \}\) are nonnegative r.v.s with common distribution F satisfying Assumption 1.1. If \(F\in \mathscr{D}\), then for any \(\gamma >0\),

$$\begin{aligned} P\bigl(S(t)>x\bigr)\gtrsim \nu \lambda tL_{F}\overline{F}(x) \end{aligned}$$

uniformly for all \(x\geq \gamma t\), which is equivalent to

$$ \liminf_{t\to \infty }\inf_{x\geq \gamma t} \frac{P(S(t)>x)}{ \nu \lambda tL_{F}\overline{F}(x)}\geq 1. $$
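The lower bound of Corollary 1.1 can be probed by simulation (a rough sketch under assumptions of our own: exponential interarrivals with \(\lambda =1\), \(Y_{i}\) uniform on \(\{1,2,3\}\) so that \(\nu =2\) and \(h=3\), and i.i.d. Pareto claims with \(\overline{F}(x)=x^{-1.5}\), for which F is regularly varying, hence \(F\in \mathscr{D}\) with \(L_{F}=1\)). The empirical exceedance probability should sit above \(\nu \lambda tL_{F}\overline{F}(x)\) for large t and \(x\geq \gamma t\):

```python
import random

lam, nu, alpha, t, x = 1.0, 2.0, 1.5, 50.0, 1000.0
rng = random.Random(42)

def one_run():
    # one sample of S(t) under the illustrative assumptions above
    total, clock = 0.0, 0.0
    while True:
        clock += rng.expovariate(lam)
        if clock > t:
            return total
        total += sum(rng.random() ** (-1.0 / alpha)   # Pareto claim sizes
                     for _ in range(rng.randint(1, 3)))

runs = 5000
p_hat = sum(one_run() > x for _ in range(runs)) / runs
bound = nu * lam * t * x ** (-alpha)   # nu * lam * t * L_F * Fbar(x), L_F = 1
ratio = p_hat / bound                  # asymptotically at least 1
```

Because the bound is only asymptotic and the tail is heavy, the estimated ratio is noisy; it is merely indicative that \(P(S(t)>x)\) does not fall below the bound.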

Next, we investigate the asymptotic upper bound of the precise large deviations for the aggregate claims. In this case, we assume that the claim sizes \(\{X_{ij},i\geq 1,j\geq 1\}\) follow the pairwise negatively quadrant dependence structure.

Assumption 1.4

All the claim sizes \(\{X_{ij},j\geq 1,i\geq 1\}\) are nonnegative r.v.s and follow the pairwise negatively quadrant dependence structure, that is, for all \(x\geq 0\) and \(y\geq 0\),

$$\begin{aligned} P(X_{ij}>x, X_{il}>y)\leq P(X_{ij}>x)P(X_{il}>y) \quad \text{for all } i, j, l\geq 1, j< l \end{aligned}$$

and

$$\begin{aligned} P(X_{ij}>x, X_{kl}>y)\leq P(X_{ij}>x)P(X_{kl}>y) \quad \text{for all } i, j, k, l\geq 1, i\ne k. \end{aligned}$$

For each \(i\geq 1\), \(\{X_{ij},j\geq 1\}\) are identically distributed r.v.s with common distribution \(F_{i}\).

Furthermore, we will use the following assumption, which was given by Yang and Wang [31] when they investigated the precise large deviations for extended negatively dependent random variables. Wang et al. [30] also used this assumption when they studied the precise large deviations for widely orthant dependent random variables.

Assumption 1.5

For all \(i\geq 1\), \(F_{i}\in \mathscr{D}\). Furthermore, assume that for any \(\varepsilon >0\), there exist some \(w_{1}=w_{1}(\varepsilon )>1\) and \(x_{1}=x_{1}(\varepsilon )>0\), independent of i, such that for all \(i\geq 1\), \(1\leq w\leq w_{1}\), and \(x\geq x_{1}\),

$$ \frac{\overline{F_{i}}(wx)}{\overline{F_{i}}(x)}\geq L_{F_{i}}-\varepsilon , $$

or, equivalently, for any \(\varepsilon >0\), there exist some \(0< w_{2}=w_{2}(\varepsilon )<1\) and \(x_{2}=x_{2}(\varepsilon )>0\), independent of i, such that for all \(i\geq 1\), \(w_{2}\leq w\leq 1\), and \(x\geq x_{2}\),

$$ \frac{\overline{F_{i}}(wx)}{\overline{F_{i}}(x)}\leq L_{F_{i}}^{-1}+ \varepsilon . $$

As noted in Remark 1(ii) of Wang et al. [30], Assumption 1.5 essentially requires that the distributions of \(X_{ij}\), \(i\geq 1\), \(j\geq 1\), do not differ too much from each other. In particular, if there exists a positive integer \(i_{0}\) such that \(F_{i}=F_{i_{0}}\) for all \(i\geq i_{0}\), then since \(F_{i_{0}}\in \mathscr{D}\), Assumption 1.5 is satisfied.

We will derive the asymptotic upper bound of the precise large deviations of the aggregate claims under Assumptions 1.2, 1.3, 1.4, and 1.5.

Theorem 1.2

Consider the aggregate claims (1.1). Suppose that Assumptions 1.2, 1.3, 1.4, and 1.5 are satisfied and \(EX_{ij}^{2p+1}<\infty \), \(i, j\geq 1\), for some \(p>J_{F}^{+}\). Then there exist constants \(\gamma _{1}>0\) and \(c_{1}>0\) such that for any \(\gamma \geq \gamma _{1}\),

$$\begin{aligned} P\bigl(S(t)>x\bigr)\lesssim c_{1} \nu \sum _{i=1}^{\lambda t}L_{F_{i}}^{-1} \overline{F}_{i}(x) \end{aligned}$$
(1.7)

uniformly for all \(x\geq \gamma t\), which is equivalent to

$$ \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{P(S(t)>x)}{\nu \sum_{i=1}^{\lambda t}L_{F_{i}}^{-1}\overline{F}_{i}(x)}\leq c_{1}. $$

If \(\{X_{ij},i\geq 1,j\geq 1\}\) are identically distributed r.v.s with common distribution \(F\in \mathscr{D}\), then Assumptions 1.3 and 1.5 are satisfied. Thus from Theorem 1.2 we can obtain the following corollary.

Corollary 1.2

Consider the aggregate claims (1.1). Suppose that Assumption 1.2 is satisfied, the claim sizes \(\{X_{ij},i\geq 1,j\geq 1\}\) are nonnegative r.v.s with common distribution F satisfying Assumption 1.4, and \(EX_{ij}^{2p+1}<\infty \), \(i, j\geq 1\), for some \(p>J_{F}^{+}\). If \(F\in \mathscr{D}\), then there exist constants \(\gamma _{1}>0\) and \(c_{1}>0\) such that for any \(\gamma \geq \gamma _{1}\),

$$\begin{aligned} P\bigl(S(t)>x\bigr)\lesssim c_{1}\nu \lambda tL_{F}^{-1} \overline{F}(x) \end{aligned}$$

uniformly for all \(x\geq \gamma t\), which is equivalent to

$$ \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{P(S(t)>x)}{\nu \lambda tL_{F}^{-1}\overline{F}(x)}\leq c_{1}. $$

The rest of the paper is organized as follows. Section 2 includes some lemmas. In Sect. 3, we collect the proofs of our main results.

Some lemmas

This section presents some lemmas, which are useful in proving the main results of this paper. The following lemma can be obtained from Proposition 2.2.1 of Bingham et al. [2] and Lemma 3.5 of Tang and Tsitsiashvili [27].

Lemma 2.1

If \(V\in \mathscr{D}\), then

  (1)

    for each \(\rho >J_{V}^{+}\), there exist positive constants A and B such that

    $$ \frac{\overline{V}(y)}{\overline{V}(x)}\leq A \biggl(\frac{x}{y} \biggr) ^{\rho } $$

    for all \(x\geq y\geq B\);

  (2)

    for each \(\rho >J_{V}^{+}\),

    $$ x^{-\rho }=o\bigl(\overline{V}(x)\bigr). $$

Consider the aggregate claims (1.1). For the delayed renewal counting process in (1.4), we have the following conclusion.

Lemma 2.2

In addition to (1.3), assume that \(E\theta =\lambda ^{-1}>0\). Then for every \(0<\delta <1\),

$$ \lim_{t\to \infty }\sup_{1\leq k\leq h}P \biggl( \biggl\vert \frac{N ^{*}(t)}{\lambda t}-1 \biggr\vert >\delta \biggr)=0. $$

The proof of this lemma is similar to that of Lemma 2.1 of Chen and Yuen [5]. For completeness, we give it with some modifications.

Proof

For all \(1\leq k\leq h\) and \(t>0\),

$$\begin{aligned} P \biggl( \biggl\vert \frac{N^{*}(t)}{\lambda t}-1 \biggr\vert >\delta \biggr) \leq & P \bigl(N^{*}(t)>\lambda t(1+\delta ) \bigr)+P \bigl(N^{*}(t) \leq \lambda t(1-\delta ) \bigr) \\ \leq & P \bigl(\tau _{\lfloor \lambda t(1+\delta )\rfloor }^{*}\leq t \bigr) +P \bigl( \tau _{\lfloor \lambda t(1-\delta )\rfloor +1}^{*}> t \bigr) \\ =& P \Biggl(\theta _{1}^{*}+\sum _{i=2}^{\lfloor \lambda t(1+ \delta )\rfloor }\theta _{i}\leq t \Biggr) +P \Biggl(\theta _{1}^{*}+ \sum _{i=2}^{\lfloor \lambda t(1-\delta )\rfloor +1}\theta _{i}> t \Biggr) \\ \leq & P \Biggl(\sum_{i=2}^{\lfloor \lambda t(1+\delta )\rfloor }\theta _{i}\leq t \Biggr) +P \Biggl(\theta ^{*}+\sum _{i=2}^{ \lfloor \lambda t(1-\delta )\rfloor +1}\theta _{i}> t \Biggr), \end{aligned}$$

where in the last step, we used (1.3) and \(P(\theta _{1}^{*}>t)=P( \theta _{1}>t|Y_{1}=k)\) for any \(1\leq k\leq h\). By the law of large numbers for the partial sums \(\sum_{i=1}^{n}\theta _{i}\), \(n\geq 1\), we have

$$\begin{aligned} \lim_{t\to \infty }{\bigl\lfloor \lambda t(1+\delta )\bigr\rfloor }^{-1} \sum_{i=2}^{\lfloor \lambda t(1+\delta )\rfloor }\theta _{i} = \lambda ^{-1}\quad \text{a.s.} \end{aligned}$$

Thus

$$\begin{aligned} \lim_{t\to \infty }t^{-1}\sum _{i=2}^{\lfloor \lambda t(1+ \delta )\rfloor }\theta _{i} =1+\delta \quad \text{a.s.} \end{aligned}$$

Similarly,

$$\begin{aligned} \lim_{t\to \infty }t^{-1}\sum _{i=2}^{\lfloor \lambda t(1- \delta )\rfloor +1}\theta _{i} =1-\delta \quad \text{a.s.} \end{aligned}$$

Therefore

$$\begin{aligned} \lim_{t\to \infty }P \Biggl(\sum_{i=2}^{\lfloor \lambda t(1+\delta )\rfloor } \theta _{i}\leq t \Biggr)=0 \end{aligned}$$

and

$$\begin{aligned} \lim_{t\to \infty } P \Biggl(\sum_{i=2}^{\lfloor \lambda t(1-\delta )\rfloor +1} \theta _{i}> \biggl(1-\frac{\delta }{2} \biggr)t \Biggr)=0. \end{aligned}$$

Since for all \(t>0\),

$$\begin{aligned} &P \Biggl(\theta ^{*}+\sum_{i=2}^{\lfloor \lambda t(1-\delta ) \rfloor +1} \theta _{i}> t \Biggr) \\ &\quad = P \Biggl(\theta ^{*}+\sum_{i=2}^{\lfloor \lambda t(1-\delta )\rfloor +1} \theta _{i}> t, 0< \theta ^{*}\leq \frac{\delta }{2}t \Biggr) +P \Biggl(\theta ^{*}+\sum_{i=2}^{\lfloor \lambda t(1-\delta ) \rfloor +1} \theta _{i}> t, \theta ^{*}> \frac{\delta }{2}t \Biggr) \\ &\quad \leq P \Biggl(\sum_{i=2}^{\lfloor \lambda t(1-\delta )\rfloor +1} \theta _{i}> \biggl(1-\frac{\delta }{2} \biggr)t \Biggr)+ P \biggl(\theta ^{*}> \frac{\delta }{2}t \biggr), \end{aligned}$$

we have

$$\begin{aligned} \lim_{t\to \infty }P \Biggl(\theta ^{*}+\sum _{i=2}^{ \lfloor \lambda t(1-\delta )\rfloor +1}\theta _{i}> t \Biggr)=0. \end{aligned}$$

This completes the proof of Lemma 2.2. □
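Lemma 2.2 is a weak-law statement for the (delayed) renewal counting process. A small simulation with the ordinary process \(N(t)\) (our own illustration, using exponential interarrivals) shows the deviation probability \(P(|N(t)/(\lambda t)-1|>\delta )\) shrinking as t grows:

```python
import random

def renewal_count(t, lam, rng):
    # number of renewals (accidents) up to time t
    n, clock = 0, 0.0
    while True:
        clock += rng.expovariate(lam)   # i.i.d. exponential interarrivals
        if clock > t:
            return n
        n += 1

rng = random.Random(7)
lam, delta = 2.0, 0.1

def deviation_prob(t, runs=400):
    # empirical P(|N(t)/(lam*t) - 1| > delta)
    return sum(abs(renewal_count(t, lam, rng) / (lam * t) - 1) > delta
               for _ in range(runs)) / runs

p_small_t = deviation_prob(10.0)     # sizeable for small t
p_large_t = deviation_prob(1000.0)   # essentially 0 for large t
```

For \(t=10\) the count fluctuates around \(\lambda t=20\) with relative spread \(\approx (\lambda t)^{-1/2}\), well above δ; for \(t=1000\) the relative spread is far below δ, so the deviation probability collapses.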

The following lemma gives asymptotic lower and upper bounds for the tail of the partial sums with UTAI increments.

Lemma 2.3

Let n be any positive integer, and let \(\{\xi _{k}, 1\leq k\leq n\}\) be nonnegative and UTAI r.v.s with distributions \(V_{k}\), \(1\leq k\leq n\), respectively. If \(V_{k}\in \mathscr{D}\), \(1\leq k\leq n\), then, as \(x\to \infty \), we have

$$ \sum_{k=1}^{n}L_{V_{k}} \overline{V_{k}}(x)\lesssim P \Biggl(\sum _{k=1}^{n}\xi _{k}>x \Biggr)\lesssim \sum_{k=1}^{n}L_{V _{k}}^{-1} \overline{V_{k}}(x). $$

Proof

When \(n=1\), by \(V_{1}\in \mathscr{D}\) we know that Lemma 2.3 obviously holds. We further assume that \(n\geq 2\). Denote \(S_{n}=\sum_{k=1}^{n}\xi _{k}\). Firstly, we estimate an asymptotic upper bound for \(P(S_{n}>x)\). For arbitrarily fixed \(0<\varepsilon <1\) and any \(x>0\),

$$\begin{aligned} P(S_{n}>x) \leq &P \Bigl(\mathop{\bigcup } _{k=1}^{n}\bigl(\xi _{k}>(1- \varepsilon )x \bigr) \Bigr) +P \Bigl(S_{n}>x,\mathop{\bigcap } _{k=1} ^{n}\bigl(\xi _{k}\leq (1-\varepsilon )x\bigr) \Bigr) \\ =:&I_{1}(x)+I_{2}(x). \end{aligned}$$
(2.1)

For \(I_{1}(x)\), we have that for any \(x>0\),

$$\begin{aligned} I_{1}(x)\leq \sum_{k=1}^{n} \overline{V_{k}}\bigl((1-\varepsilon )x\bigr). \end{aligned}$$

For any \(1\leq k\leq n\), from the definition of \(L_{V_{k}}\) we have

$$\begin{aligned} \lim_{y\uparrow 1}\limsup_{x\to \infty } \frac{\overline{V_{k}}(xy)}{\overline{V _{k}}(x)} = \biggl(\lim_{y\downarrow 1}\liminf _{x\to \infty }\frac{\overline{V _{k}}(xy\cdot y^{-1})}{\overline{V_{k}}(xy)} \biggr)^{-1}=L_{V_{k}} ^{-1}. \end{aligned}$$

Thus, for any \(\varepsilon _{1}>0\), there exist some \(0<\delta _{1}= \delta _{1}(\varepsilon _{1}, n)<1\) and \(x_{1}=x_{1}(\varepsilon _{1}, n)>0\) such that for all \(\delta _{1}\leq y\leq 1\), \(x\geq x_{1}\), and \(1\leq k\leq n\),

$$\begin{aligned} \frac{\overline{V_{k}}(xy)}{\overline{V_{k}}(x)}\leq L_{V_{k}}^{-1}+ \varepsilon _{1} \end{aligned}$$
(2.2)

and for all \(1\leq i\ne j\leq n\) and \(x\geq x_{1}\),

$$\begin{aligned} P \biggl(\xi _{j}>\frac{\varepsilon x}{n-1}|\xi _{i}> \frac{x}{n} \biggr)\leq \frac{\varepsilon _{1}}{n}. \end{aligned}$$
(2.3)

Therefore, for the above \(\varepsilon _{1}\), taking \(0<\varepsilon \leq 1-\delta _{1}\) in (2.1), since \(0< L_{V_{k}}\leq 1\), \(1\leq k\leq n\), by (2.2), for all \(x\geq x_{1}\), we have

$$\begin{aligned} I_{1}(x) \leq & \sum_{k=1}^{n} \bigl(L_{V_{k}}^{-1}+\varepsilon _{1} \bigr) \overline{V _{k}}(x) \\ \leq & (1+\varepsilon _{1})\sum_{k=1}^{n}L_{V_{k}}^{-1} \overline{V_{k}}(x). \end{aligned}$$

Hence

$$\begin{aligned} \limsup_{x\to \infty }\frac{I_{1}(x)}{\sum_{k=1}^{n}L_{V_{k}}^{-1}\overline{V_{k}}(x)}\leq \lim _{\varepsilon _{1}\downarrow 0}(1+\varepsilon _{1})=1. \end{aligned}$$
(2.4)

For \(I_{2}(x)\), by (2.3) we have that, for \(x>x_{1}\),

$$\begin{aligned} I_{2}(x) \leq &\sum_{i=1}^{n}P \biggl(S_{n}>x,\xi _{i}>\frac{x}{n},\mathop{ \bigcap } _{k=1}^{n}\bigl(\xi _{k} \leq (1-\varepsilon )x\bigr) \biggr) \\ \leq &\sum_{i=1}^{n}P \biggl(\xi _{i}>\frac{x}{n}, S_{n}-\xi _{i}> \varepsilon x \biggr) \\ \leq &\sum_{i=1}^{n}\sum _{1\leq j\ne i\leq n}P \biggl(\xi _{i}>\frac{x}{n}, \xi _{j}>\frac{\varepsilon x}{n-1} \biggr) \\ =&\sum_{i=1}^{n}\sum _{1\leq j\ne i\leq n}P \biggl(\xi _{j}>\frac{\varepsilon x}{n-1}| \xi _{i}>\frac{x}{n} \biggr)P \biggl(\xi _{i}> \frac{x}{n} \biggr) \\ \leq & \varepsilon _{1}\sum_{k=1}^{n} \overline{V_{k}} \biggl(\frac{x}{n} \biggr). \end{aligned}$$
(2.5)

Since \(V_{k}\in \mathscr{D}\), \(1\leq k\leq n\), by Lemma 2.1(1), for any \(\rho >\max {\{J_{V_{k}}^{+}, 1\leq k\leq n\}}\), there exist constants \(A>0\) and \(B\geq x_{1}\) such that for all \(1\leq k\leq n\) and \(x>y\geq B\),

$$\begin{aligned} \frac{\overline{V_{k}}(y)}{\overline{V_{k}}(x)}\leq A \biggl(\frac{x}{y} \biggr)^{\rho }. \end{aligned}$$
(2.6)

It follows from (2.5) and (2.6) that for all \(x\geq nB\),

$$\begin{aligned} I_{2}(x)\leq \varepsilon _{1}An^{\rho }\sum _{k=1}^{n}\overline{V_{k}}(x) \leq \varepsilon _{1}An^{\rho }\sum _{k=1}^{n}L_{V_{k}}^{-1} \overline{V_{k}}(x). \end{aligned}$$

Therefore

$$\begin{aligned} \limsup_{x\to \infty }\frac{I_{2}(x)}{\sum_{k=1}^{n}L_{V_{k}}^{-1}\overline{V_{k}}(x)}\leq \lim _{\varepsilon _{1}\downarrow 0}\varepsilon _{1}An^{\rho }=0. \end{aligned}$$
(2.7)

By (2.1), (2.4), and (2.7) we get that

$$ P(S_{n}>x)\lesssim \sum_{k=1}^{n}L_{V_{k}}^{-1} \overline{V_{k}}(x). $$

Now we prove the asymptotic lower bound for \(P(S_{n}>x)\). Since \(\{ \xi _{i}, 1\leq i\leq n \} \) are UTAI, for any \(0<\varepsilon _{2}<1\), there exists \(x_{2}=x_{2}(\varepsilon _{2}, n)>0\) such that for all \(1\leq i\ne j\leq n\) and \(x> x_{2}\),

$$\begin{aligned} P (\xi _{i}>x|\xi _{j}>x )\leq \frac{\varepsilon _{2}}{n}. \end{aligned}$$

Hence, for all \(x> x_{2}\),

$$\begin{aligned} \sum_{1\leq i< j\leq n}P(\xi _{i}>x,\xi _{j}>x) =& \sum_{1\leq i< j\leq n}P(\xi _{i}>x|\xi _{j}>x)P(\xi _{j}>x) \\ \leq & \varepsilon _{2}\sum_{k=1}^{n} \overline{V_{k}}(x). \end{aligned}$$

Therefore, since \(\{\xi _{k}, 1\leq k\leq n\}\) are nonnegative, for all \(x>x_{2}\),

$$\begin{aligned} P(S_{n}>x) \geq & P \Biggl(\bigcup_{k=1}^{n} \{\xi _{k}>x\} \Biggr) \\ \geq & \sum_{k=1}^{n} \overline{V_{k}}(x)-\sum_{1\leq i< j\leq n}P(\xi _{i}>x,\xi _{j}>x) \\ \geq & (1-\varepsilon _{2})\sum_{k=1}^{n} \overline{V_{k}}(x), \end{aligned}$$

which means that

$$\begin{aligned} \liminf_{x\to \infty }\frac{P(S_{n}>x)}{\sum_{k=1}^{n}\overline{V_{k}}(x)}\geq \lim _{\varepsilon _{2}\downarrow 0}(1-\varepsilon _{2})=1. \end{aligned}$$

Since \(0< L_{V_{k}}\leq 1\), \(1\leq k\leq n\), we have

$$ P(S_{n}>x)\gtrsim \sum_{k=1}^{n} \overline{V_{k}}(x)\geq \sum_{k=1}^{n}L_{V_{k}} \overline{V_{k}}(x). $$

This completes the proof of the lemma. □

The following lemma presents an upper bound of the tail of the partial sums with dominated variation increments.

Lemma 2.4

Let \(\{\xi _{k}, k\geq 1\}\) be a sequence of real-valued r.v.s with distributions \(V_{k}\in \mathscr{D}\), \(k\geq 1\), respectively. Suppose that there exists a distribution V satisfying the relation

$$\begin{aligned} 0< M_{L}:=\liminf_{x\to \infty }\inf_{i\geq 1} \frac{\overline{V _{i}}(x)}{\overline{V}(x)} \leq \limsup_{x\to \infty }\sup _{i\geq 1}\frac{\overline{V_{i}}(x)}{\overline{V}(x)}=:M_{U}< \infty . \end{aligned}$$
(2.8)

Then for any \(p>J_{V}^{+}\), there exists a constant \(M>0\) such that for all \(x\geq 0\) and \(n\geq 1\),

$$ P \Biggl(\sum_{k=1}^{n}\xi _{k}>x \Biggr)\leq Mn^{p}\sum _{k=1} ^{n}\overline{V_{k}}(x). $$

Proof

Since \(V_{k}\in \mathscr{D}\), \(k\geq 1\), by (2.8) we know that \(V\in \mathscr{D}\). Hence by Lemma 2.1(2) and (2.8) we get that for any \(p>J_{V}^{+}\),

$$\begin{aligned} \lim_{x\to \infty }\sup_{k\geq 1} \frac{x^{-p}}{\overline{V _{k}}(x)} =\lim_{x\to \infty }\frac{x^{-p}}{\overline{V}(x)} \cdot \limsup_{x\to \infty }\sup_{k\geq 1} \frac{ \overline{V}(x)}{\overline{V_{k}}(x)}=0. \end{aligned}$$

Thus there exist constants \(C_{1}>0\) and \(\varepsilon >0\) such that for all \(x\geq \varepsilon \) and \(k\geq 1\),

$$\begin{aligned} \frac{x^{-p}}{\overline{V_{k}}(x)}\leq C_{1}. \end{aligned}$$
(2.9)

Since \(V\in \mathscr{D}\), by Lemma 2.1(1), for any \(p>J_{V}^{+}\), there exist positive constants A and B such that for all x and \(n\geq 1\) with \(\frac{x}{n}\geq B\),

$$\begin{aligned} \frac{\overline{V}(\frac{x}{n})}{\overline{V}(x)}\leq An^{p}. \end{aligned}$$

By (2.8) there exists a constant \(B_{1}>B\) such that for all \(x\geq B_{1}\) and \(k\geq 1\),

$$\begin{aligned} \frac{1}{2}M_{L}\leq \frac{\overline{V_{k}}(x)}{\overline{V}(x)} \leq 2M_{U}. \end{aligned}$$

Thus, for all x and \(n\geq 1\) with \(\frac{x}{n}\geq B_{1}\),

$$\begin{aligned} \frac{\overline{V_{k}}(x/n)}{\overline{V_{k}}(x)}=\frac{\overline{V _{k}}(x/n)}{\overline{V}(x/n)} \cdot \frac{\overline{V}(x/n)}{ \overline{V}(x)}\cdot \frac{\overline{V}(x)}{\overline{V_{k}}(x)} \leq 4\frac{M_{U}}{M_{L}}\cdot An^{p}=:C_{2}n^{p}. \end{aligned}$$
(2.10)

By (2.9) and (2.10), for all \(x\geq \varepsilon \), \(k\geq 1\), and \(n\geq 1\), we have

$$\begin{aligned} \overline{V_{k}}(x/n) \leq & \mathbf{1}_{\{\varepsilon \leq x\leq nB _{1}\}}+ \overline{V_{k}}(x/n)\mathbf{1}_{\{x\geq nB_{1}\}} \\ \leq & (nB_{1}/x)^{p}+C_{2}n^{p} \overline{V_{k}}(x) \\ \leq & (nB_{1})^{p}C_{1} \overline{V_{k}}(x)+C_{2}n^{p}\overline{V _{k}}(x) \\ =& \bigl(B_{1}^{p}C_{1}+C_{2} \bigr)n^{p}\overline{V_{k}}(x). \end{aligned}$$
(2.11)

Thus for all \(x>\varepsilon \) and \(n\geq 1\),

$$\begin{aligned} P \Biggl(\sum_{k=1}^{n}\xi _{k}>x \Biggr)\leq \sum_{k=1}^{n}P \biggl(\xi _{k}>\frac{x}{n} \biggr)\leq \bigl(B_{1}^{p}C_{1}+C_{2} \bigr)n^{p} \sum_{k=1}^{n} \overline{V_{k}}(x). \end{aligned}$$
(2.12)

When \(0\leq x\leq \varepsilon \), with p as in (2.9), we have that for all \(n\geq 1\),

$$\begin{aligned} P \Biggl(\sum_{k=1}^{n}\xi _{k}>x \Biggr)\leq \frac{ \overline{V_{1}}(x)}{\overline{V_{1}}(\varepsilon )} = \varepsilon ^{p}\cdot \frac{\varepsilon ^{-p}}{\overline{V_{1}}(\varepsilon )} \cdot \overline{V_{1}}(x) \leq C_{1}\varepsilon ^{p}n^{p}\sum _{k=1}^{n}\overline{V_{k}}(x). \end{aligned}$$
(2.13)

It follows from (2.12) and (2.13) that for all \(x\geq 0\) and \(n\geq 1\),

$$\begin{aligned} P \Biggl(\sum_{k=1}^{n}\xi _{k}>x \Biggr)\leq Mn^{p}\sum _{k=1} ^{n}\overline{V_{k}}(x), \end{aligned}$$

where \(M=\max \{ B_{1}^{p}C_{1}+C_{2}, C_{1}\varepsilon ^{p} \} \). This completes the proof of the lemma. □
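The key step (2.10)–(2.11) can be checked exactly for a concrete tail. In the illustrative example below (our assumption, not the paper's generality) the tail is Pareto, \(\overline{V}(x)=\min (1, x^{-\alpha })\), for which the upper index \(J_{V}^{+}\) equals α, so any \(p>\alpha \) makes \(\overline{V}(x/n)\leq n^{p}\overline{V}(x)\) with constant 1:

```python
# Deterministic check of the step (2.10)-(2.11) for a concrete tail.
# Illustrative assumption: a Pareto tail V(x) = min(1, x**(-alpha)),
# for which the upper index J_V^+ equals alpha.
alpha = 1.5
p = 2.0  # any p > J_V^+ = alpha works in Lemma 2.4

def tail(x):
    return 1.0 if x <= 1.0 else x ** (-alpha)

ok = True
for n in (1, 2, 5, 10, 100):
    for x in (2.0, 10.0, 1e3, 1e6):
        # Whenever x/n >= 1, tail(x/n)/tail(x) = n**alpha <= n**p,
        # which is the lemma's building block with C = 1.
        if x / n >= 1.0 and tail(x / n) > n ** p * tail(x):
            ok = False
```

The inflation factor \(n^{\alpha }\) is exactly why the lemma's bound carries the polynomial \(n^{p}\) rather than a constant.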

The following lemma shows the closure property of r.v.s satisfying (1.2).

Lemma 2.5

Consider the aggregate claims (1.1). Suppose that Assumptions 1.1 and 1.3 are satisfied. Also assume that \(F_{i} \in \mathscr{D}\), \(i\geq 1\). If the sequences \(\{Y_{i}, i\geq 1\}\) and \(\{X_{ij}, i\geq 1, j\geq 1\}\) are mutually independent, then \(\{ Z_{i}=\sum_{j=1}^{Y_{i}}X_{ij}, i\geq 1 \} \) also satisfy relation (1.2).

Proof

For any integer \(n\geq 2\), \(1\leq i< j\leq n\), and \(x>0\), we have that

$$\begin{aligned} &P(Z_{i}>x, Z_{j}>x) \\ &\quad = P \Biggl(\sum_{k=1}^{Y_{i}}X_{ik}>x, \sum_{l=1}^{Y_{j}}X_{jl}>x \Biggr) \\ &\quad \leq \sum_{k_{1}=1}^{h}\sum _{k_{2}=1}^{h}P \Biggl(\sum _{k=1}^{k_{1}}X _{ik}>x, \sum _{l=1}^{k_{2}}X_{jl}>x \Biggr)P(Y_{i}=k_{1})P(Y_{j}=k _{2}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\sum _{k_{2}=1}^{h}\sum _{k=1}^{k_{1}}\sum_{l=1}^{k_{2}} P \biggl(X_{ik}>\frac{x}{k_{1}k_{2}}, X_{jl}> \frac{x}{k _{1}k_{2}} \biggr)P(Y_{i}=k_{1})P(Y_{j}=k_{2}). \end{aligned}$$
(2.14)

By (2.11) we know that for any \(p>J_{F}^{+}\), there exist constants \(C_{3}>0\) and \(\varepsilon >0\) such that for all \(j\geq 1\), \(k_{i}\geq 1\), \(i=1,2\), \(k\geq 1\), and \(x>\varepsilon \), we have that

$$\begin{aligned} P \biggl(X_{jk}>\frac{x}{k_{1}k_{2}} \biggr)\leq C_{3}k_{1}^{p}k_{2} ^{p}P (X_{jk}>x ). \end{aligned}$$

So, for the above \(p>J_{F}^{+}\), we have that for all \(j\geq 1\), \(k_{i}\geq 1\), \(i=1,2\), \(k\geq 1\), and \(x>\varepsilon \),

$$\begin{aligned} P(Z_{j}>x)=P \Biggl(\sum_{k=1}^{Y_{j}}X_{jk}>x \Biggr)\geq P (X _{jk}>x ) \geq \frac{1}{C_{3}k_{1}^{p}k_{2}^{p}}P \biggl(X_{jk}>\frac{x}{k _{1}k_{2}} \biggr). \end{aligned}$$
(2.15)

Therefore by (2.14) and (2.15), for the above \(p>J_{F}^{+}\) and any \(\alpha >0\), we have that

$$\begin{aligned} & \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i< j\leq n}xP(Z _{i}>x|Z_{j}>x) \\ &\quad = \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i< j\leq n}\frac{xP(Z _{i}>x, Z_{j}>x)}{P(Z_{j}>x)} \\ &\quad \leq \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq i< j\leq n} \sum_{k_{1}=1}^{h} \sum_{k_{2}=1}^{h}\sum _{k=1}^{k_{1}}\sum_{l=1}^{k _{2}} \frac{xP (X_{ik}>\frac{x}{k_{1}k_{2}}, X_{jl}>\frac{x}{k _{1}k_{2}} )}{P(Z_{j}>x)}P(Y_{i}=k_{1})P(Y_{j}=k_{2}) \\ &\quad \leq C_{3}\lim_{n\to \infty }\sup _{x\geq \alpha n} \sup_{1\leq i< j\leq n}\sum _{k_{1}=1}^{h}\sum_{k_{2}=1}^{h} \sum_{k=1} ^{k_{1}}\sum _{l=1}^{k_{2}} \frac{xk_{1}^{p}k_{2}^{p}P (X_{ik}>\frac{x}{k _{1}k_{2}}, X_{jl}>\frac{x}{k_{1}k_{2}} )}{P (X_{jl}>\frac{x}{k _{1}k_{2}} )} \\ &\qquad {}\cdot P(Y_{i}=k_{1})P(Y_{j}=k_{2}) \\ &\quad = C_{3}\lim_{n\to \infty }\sup _{x\geq \alpha n}\sup_{1\leq i< j \leq n}\sum _{k_{1}=1}^{h}\sum_{k_{2}=1}^{h} \sum_{k=1}^{k_{1}}\sum _{l=1} ^{k_{2}} k_{1}^{p}k_{2}^{p}xP \biggl(X_{ik}>\frac{x}{k_{1}k_{2}}\Big|X _{jl}> \frac{x}{k_{1}k_{2}} \biggr) \\ &\qquad{} \cdot P(Y_{i}=k_{1})P(Y_{j}=k_{2}) \\ &\quad \leq C_{3}\sum_{k_{1}=1}^{h} \sum_{k_{2}=1}^{h}\sum _{k=1}^{k_{1}} \sum_{l=1}^{k_{2}}k_{1}^{p}k_{2}^{p} \lim_{n\to \infty } \sup_{x\geq \alpha n}\sup _{1\leq i< j\leq n} \sup_{1\leq k\leq h, 1\leq l\leq h} xP \biggl(X_{ik}>\frac{x}{k_{1}k _{2}}\Big|X_{jl}> \frac{x}{k_{1}k_{2}} \biggr) \\ &\qquad {}\cdot P(Y_{i}=k_{1})P(Y_{j}=k_{2}) \\ &\quad = 0, \end{aligned}$$

where in the last step, we used Assumption 1.1. This completes the proof of the lemma. □

Consider the compound renewal risk model mentioned in the introduction and let

$$\begin{aligned} S_{n}=\sum_{i=1}^{n}\sum _{j=1}^{Y_{i}}X_{ij},\quad n\geq 1, \end{aligned}$$
(2.16)

which expresses the aggregate claims caused by the first n accidents. The following lemma gives the asymptotic upper bound for the precise large deviations of \(S_{n}\) as \(n\to \infty \).

Lemma 2.6

Consider the aggregate claims (2.16). Suppose that Assumptions 1.2, 1.3, 1.4, and 1.5 are satisfied and \(EX_{ij}^{2p+1}<\infty \), \(i, j\geq 1\), for some \(p>J_{F}^{+}\). Then there exist constants \(\gamma _{0}>0\) and \(c_{0}>0\) such that for any \(\gamma \geq \gamma _{0}\),

$$\begin{aligned} P(S_{n}>x)\lesssim c_{0}\nu \sum _{i=1}^{n}L_{F_{i}}^{-1} \overline{F}_{i}(x) \end{aligned}$$

uniformly for all \(x\geq \gamma n\) as \(n\to \infty \), which is equivalent to

$$ \limsup_{n\to \infty }\sup_{x\geq \gamma n} \frac{P(S_{n}>x)}{\nu \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)}\leq c_{0}. $$

Proof

By Assumption 1.5, for any \(0<\varepsilon _{3}<1\), there exist some \(0<\omega _{2}=\omega _{2}( \varepsilon _{3})<1\) and \(x_{3}=x_{3}(\varepsilon _{3})>0\), irrespective of i, such that for all \(i\geq 1\), \(\omega _{2}\leq \omega <1\), and \(x\geq x_{3}\),

$$\begin{aligned} \overline{F_{i}}(\omega x)\leq \bigl(L_{F_{i}}^{-1}+ \varepsilon _{3} \bigr)\overline{F _{i}}(x). \end{aligned}$$
(2.17)

Take any constant δ such that \(\omega _{2}\leq \delta <1\). By Assumption 1.3 there exist \(0< M_{U1}<\infty \), \(0< M_{L1}< \infty \), and \(x_{4}>0\) such that for all \(i\geq 1\) and \(x\geq x_{4}\),

$$\begin{aligned} M_{L1}\overline{F}\bigl(\delta h^{-1} x\bigr)\leq \overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \end{aligned}$$
(2.18)

and for all \(i\geq 1\) and \(x>0\),

$$\begin{aligned} \overline{F_{i}}\bigl(\delta h^{-1} x\bigr)\leq M_{U1}\overline{F}\bigl(\delta h^{-1} x\bigr). \end{aligned}$$
(2.19)

Since \(F\in \mathscr{D}\), by (2.11), for \(p>J_{F}^{+}\), there exist \(x_{5}>x_{4}\) and \(B_{0}>0\) such that for all \(x>x_{5}\) and \(k\geq 1\),

$$\begin{aligned} \overline{F_{k}}\bigl(\delta h^{-1}x\bigr)\leq B_{0}h^{p}\overline{F_{k}}(\delta x). \end{aligned}$$
(2.20)

For all \(x>0\) and \(n\geq 1\), we have the decomposition

$$\begin{aligned} P(S_{n}>x) =& \sum_{k_{1}=1}^{h}\cdots \sum_{k_{n}=1}^{h}P \Biggl(\sum _{i=1} ^{n}\sum_{j=1}^{k_{i}}X_{ij}>x \Biggr)\prod_{i=1}^{n}\mathrm {P}(Y_{i}=k _{i}) \\ =& \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h} \Biggl\{ P \Biggl(\sum _{i=1} ^{n}\sum_{j=1}^{k_{i}}X_{ij}>x, \mathop{\bigcup }_{1\leq i\leq n, 1 \leq j< k\leq k_{i}}\bigl\{ X_{ij}>\delta h^{-1} x, X_{ik}>\delta h^{-1} x\bigr\} \Biggr) \\ &{} +P \Biggl(\sum_{i=1}^{n}\sum _{j=1}^{k_{i}}X_{ij}>x, \mathop{ \bigcup }_{1\leq i< l\leq n, 1\leq j\leq k_{i}, 1\leq r\leq k _{l}}\bigl\{ X_{ij}>\delta h^{-1} x, X_{lr}>\delta h^{-1} x\bigr\} \Biggr) \\ &{} +P \Biggl(\sum_{i=1}^{n}\sum _{j=1}^{k_{i}}X_{ij}>x, \mathop{ \bigcap }_{1\leq i\leq n, 1\leq j\leq k_{i}}\bigl\{ X_{ij}\leq \delta h^{-1} x\bigr\} \Biggr) \\ &{} +P \Biggl(\sum_{i=1}^{n}\sum _{j=1}^{k_{i}}X_{ij}>x, \mathop{ \bigcup }_{1\leq i\leq n, 1\leq j\leq k_{i}}\bigl\{ X_{ij}>\delta h ^{-1} x, X_{ik}\leq \delta h^{-1} x, X_{lr}\leq \delta h^{-1} x, \\ &\quad 1\leq k\ne j\leq k_{i}, 1\leq l\ne i\leq n, 1\leq r\leq k_{l}\bigr\} \Biggr) \Biggr\} \prod_{i=1}^{n} \mathrm {P}(Y_{i}=k_{i}) \\ =:&\sum_{i=1}^{4}I_{i}(x,n). \end{aligned}$$
(2.21)

For \(I_{1}(x,n)\), by Assumption 1.4 we have that for all \(n\geq 1\) and \(x\geq \gamma n\),

$$\begin{aligned} I_{1}(x,n) \leq & \sum_{k_{1}=1}^{h}\cdots \sum_{k_{n}=1}^{h}\sum _{i=1} ^{n}\sum_{1\leq j< k\leq k_{i}}P \bigl(X_{ij}>\delta h^{-1} x, X_{ik}>\delta h^{-1} x\bigr)\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ \leq & \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{i=1}^{n} \sum_{1\leq j< k\leq k_{i}} \bigl(\overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \bigr) ^{2}\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ \leq & \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{i=1}^{n}k_{i} ^{2} \bigl(\overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \bigr)^{2}\prod_{i=1} ^{n}P(Y_{i}=k_{i}) \\ =& EY^{2}\sum_{i=1}^{n} \bigl(\overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \bigr) ^{2}. \end{aligned}$$

By (2.20), (2.17), (2.19), and \(L_{F_{i}}^{-1}\geq 1\), \(i\geq 1\), we obtain that

$$\begin{aligned} & \limsup_{n\to \infty }\sup_{x\geq \gamma n} \frac{I_{1}(x,n)}{\nu \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}}{\nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n}\frac{\sum_{i=1}^{n}\overline{F_{i}}(\delta x)\overline{F_{i}}(\delta h^{-1}x)}{ \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}}{\nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n} \frac{\sum_{i=1}^{n} (L_{F_{i}}^{-1}+\varepsilon _{3} )\overline{F_{i}}(x)\overline{F_{i}}(\delta h^{-1} x)}{ \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}M_{U1}}{\nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n} \frac{\sum_{i=1}^{n} (L_{F_{i}}^{-1}+\varepsilon _{3} )\overline{F_{i}}(x)}{\sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \cdot \lim_{x\to \infty } \overline{F}\bigl(\delta h^{-1} x\bigr) \\ &\quad = 0. \end{aligned}$$
(2.22)

For \(I_{2}(x,n)\), similarly to \(I_{1}(x,n)\), by Assumption 1.4 we have that for all \(n\geq 1\) and \(x\geq \gamma n\),

$$\begin{aligned} &I_{2}(x,n) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i< l\leq n} \sum_{1\leq j\leq k_{i}, 1\leq r\leq k_{l}}P \bigl(X_{ij}>\delta h^{-1} x, X _{lr}>\delta h^{-1} x\bigr)\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i< l\leq n} \sum_{1\leq j\leq k_{i}, 1\leq r\leq k_{l}} \overline{F_{i}}\bigl(\delta h ^{-1} x\bigr) \overline{F_{l}}\bigl(\delta h^{-1} x\bigr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad = \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum_{1\leq i< l\leq n} k _{i}k_{l}\overline{F_{i}}\bigl(\delta h^{-1} x\bigr)\overline{F_{l}}\bigl(\delta h ^{-1} x\bigr)\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq EY^{2} \Biggl(\sum_{i=1}^{n} \overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \Biggr) \Biggl(\sum_{l=1}^{n} \overline{F_{l}}\bigl(\delta h^{-1} x\bigr) \Biggr). \end{aligned}$$

Thus from (2.20), (2.17), and (2.19) we get that

$$\begin{aligned} &\limsup_{n\to \infty }\sup_{x\geq \gamma n} \frac{I_{2}(x,n)}{\nu \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}}{\nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n}\frac{ (\sum_{i=1}^{n}\overline{F_{i}}(\delta x) ) (\sum_{l=1}^{n}\overline{F_{l}}(\delta h^{-1} x) )}{\sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}}{\nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n} \frac{ (\sum_{i=1}^{n} (L_{F_{i}}^{-1}+\varepsilon _{3} )\overline{F_{i}}(x) ) (\sum_{l=1}^{n}\overline{F_{l}}(\delta h^{-1} x) )}{\sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \frac{EY^{2}B_{0}h^{p}M_{U1}}{\gamma \nu }\limsup_{n\to \infty }\sup _{x\geq \gamma n} \frac{\sum_{i=1}^{n} (L_{F_{i}}^{-1}+\varepsilon _{3} )\overline{F_{i}}(x)}{\sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \cdot \limsup _{x\to \infty }x\overline{F}\bigl(\delta h^{-1} x\bigr) \\ &\quad = 0. \end{aligned}$$
(2.23)

For \(I_{3}(x,n)\), by Assumption 1.4 we have that for all \(n\geq 1\) and \(x\geq \gamma n\),

$$\begin{aligned} &I_{3}(x,n) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{i=1}^{n}P \Biggl(\sum _{j=1}^{k_{i}}X_{ij}> \frac{x}{n}, \sum_{l=1}^{n} \sum_{j=1}^{k_{l}}X_{lj}-\sum _{j=1}^{k_{i}}X_{ij}> \bigl(1-k_{i}\delta h^{-1}\bigr)x \Biggr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i\ne l\leq n}P \Biggl(\sum_{j=1}^{k_{i}}X_{ij}> \frac{x}{n}, \sum_{j=1}^{k_{l}}X_{lj}> \frac{(1-k_{i}\delta h^{-1})x}{n-1} \Biggr)\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i\ne l\leq n}\sum_{j=1}^{k_{i}} \sum_{r=1}^{k_{l}} P \biggl(X_{ij}> \frac{x}{nk_{i}}, X_{lr}>\frac{(1-k_{i}\delta h^{-1})x}{(n-1)k_{l}} \biggr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i\ne l\leq n}\sum_{j=1}^{k_{i}} \sum_{r=1}^{k_{l}} P \biggl(X_{ij}> \frac{x}{nh}, X_{lr}>\frac{(1-\delta )x}{(n-1)h} \biggr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{1\leq i\ne l\leq n}k_{i}k_{l} \overline{F_{i}} \biggl(\frac{x}{nh} \biggr) \overline{F_{l}} \biggl(\frac{(1-\delta )x}{(n-1)h} \biggr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad = EY^{2}\sum_{1\leq i\ne l\leq n} \overline{F_{i}} \biggl(\frac{x}{nh} \biggr) \overline{F_{l}} \biggl(\frac{(1-\delta )x}{(n-1)h} \biggr). \end{aligned}$$

Furthermore, from Lemma 2.1(1) we know that for \(p>J_{F} ^{+}\), there exist positive constants \(C_{4}\) and \(C_{5}\) such that for all \(x\geq y\geq C_{5}\),

$$\begin{aligned} \overline{F} (y )\leq C_{4} \biggl(\frac{x}{y} \biggr)^{p} \overline{F}(x). \end{aligned}$$
(2.24)

Thus, by (2.18), (2.19), (2.24), and \(E X_{ij}^{2p+1}<\infty \), \(i, j\geq 1\), taking \(\gamma _{0}>hC_{5}(1-\delta )^{-1}\), we have that for any \(\gamma \geq \gamma _{0}\),

$$\begin{aligned} \limsup_{n\to \infty }\sup_{x\geq \gamma n} \frac{I_{3}(x,n)}{\nu \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \leq & \frac{EY^{2}M_{U1}^{2}}{\nu M_{L1}}\limsup_{n\to \infty } \sup_{x\geq \gamma n} \frac{n\overline{F} (\frac{x}{nh} )\overline{F} (\frac{(1-\delta )x}{(n-1)h} )}{\overline{F}(x)} \\ \leq & \frac{EY^{2}M_{U1}^{2}C_{4}^{2}h^{2p}}{\nu M_{L1}(1-\delta )^{p}} \limsup_{n\to \infty }\sup _{x\geq \gamma n}n^{2p+1}\overline{F}(x) \\ =& 0. \end{aligned}$$
(2.25)

For \(I_{4}(x,n)\), by (2.20) and (2.17), for any \(\gamma >0\) and \(x\geq \gamma n\), as \(n\to \infty \), we have that

$$\begin{aligned} &I_{4}(x,n) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}P \Bigl(\mathop{\bigcup }_{1\leq i\leq n, 1\leq j\leq k_{i}}\bigl\{ X_{ij}>\delta h^{-1} x, X_{ik}\leq \delta h^{-1} x, X_{lr}\leq \delta h^{-1} x, 1\leq k\ne j\leq k_{i}, \\ &\qquad 1\leq l\ne i\leq n, 1\leq r\leq k_{l}\bigr\} \Bigr) \prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad \leq \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{i=1}^{n}\sum_{j=1}^{k_{i}} \overline{F_{i}}\bigl(\delta h^{-1} x\bigr)\prod _{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad = \sum_{k_{1}=1}^{h}\cdots\sum _{k_{n}=1}^{h}\sum _{i=1}^{n}k_{i}\overline{F_{i}} \bigl(\delta h^{-1} x\bigr)\prod_{i=1}^{n}P(Y_{i}=k_{i}) \\ &\quad = \nu \sum_{i=1}^{n} \overline{F_{i}}\bigl(\delta h^{-1} x\bigr) \\ &\quad \leq \nu B_{0}h^{p} \sum _{i=1}^{n}\overline{F_{i}}(\delta x) \\ &\quad \leq \nu B_{0}h^{p} \sum _{i=1}^{n} \bigl(L_{F_{i}}^{-1}+ \varepsilon _{3} \bigr)\overline{F_{i}}(x) \\ &\quad \leq \nu B_{0}h^{p}(1+\varepsilon _{3})\sum_{i=1}^{n}L_{F_{i}}^{-1} \overline{F_{i}}(x). \end{aligned}$$

Taking \(c_{0}=B_{0}h ^{p}\) and letting \(\varepsilon _{3} \to 0\), we get

$$\begin{aligned} \limsup_{n\to \infty }\sup_{x\geq \gamma n} \frac{I_{4}(x,n)}{ \nu \sum_{i=1}^{n}L_{F_{i}}^{-1}\overline{F}_{i}(x)}\leq c_{0}. \end{aligned}$$
(2.26)

Then Lemma 2.6 holds by substituting (2.22), (2.23), (2.25), and (2.26) into (2.21). This completes the proof of the lemma. □

The following lemma is a restatement of Theorem 1(i) of Kočetova et al. [16].

Lemma 2.7

Let the interarrival times of the accidents \(\{\theta _{i}, i\geq 1\}\) be a sequence of positive i.i.d. r.v.s with common mean \(\lambda ^{-1}\). Then for every \(a>\lambda \), there exists some \(b>1\) such that

$$ \lim_{t\to \infty }\sum_{n>at}b^{n}P( \tau _{n}\leq t)=0. $$
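The lemma can be illustrated numerically under the assumption (ours, for illustration only) of Exp(1) interarrival times, where \(P(\tau _{n}\leq t)=P(\mathrm{Pois}(t)\geq n)\); the truncation lengths and the values \(a=2\), \(b=1.1\) below are likewise illustrative choices:

```python
import math

# Numerical sketch of Lemma 2.7 under an illustrative assumption: Exp(1)
# interarrival times, so that P(tau_n <= t) = P(Poisson(t) >= n).
def poisson_tail(mean, n, terms=400):
    # P(Poisson(mean) >= n), summing the pmf upward from k = n.
    return sum(math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))
               for k in range(n, n + terms))

def truncated_sum(t, a=2.0, b=1.1, terms=200):
    # sum over n > a*t of b**n * P(tau_n <= t), truncated; the omitted terms are
    # negligible since b**n * P(Poisson(t) >= n) decays geometrically for n > a*t.
    start = int(a * t) + 1
    return sum(b ** n * poisson_tail(t, n) for n in range(start, start + terms))

s5, s20, s80 = truncated_sum(5.0), truncated_sum(20.0), truncated_sum(80.0)
# The sums shrink toward 0 as t grows, as the lemma asserts.
```

The geometric decay in n for \(n>at>\lambda t\) is exactly what lets the exponential weight \(b^{n}\) be absorbed.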

Proofs of main results

Proof of Theorem 1.1

For any \(0<\delta <1\) and \(\gamma >0\), we have that for all \(x\geq \gamma t\) and \(t>0\),

$$\begin{aligned} P\bigl(S(t)>x\bigr) \geq & \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}P \Biggl( \sum_{i=1}^{n}\sum _{j=1}^{Y_{i}}X_{ij}>x, N(t)=n \Biggr) \\ \geq & \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t} P \Biggl(\bigcup _{i=1}^{n} \Biggl\{ \sum _{j=1}^{Y_{i}}X_{ij}>x, N(t)=n \Biggr\} \Biggr) \\ \geq & \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t} \Biggl\{ \sum _{i=1}^{n}P \Biggl(\sum _{j=1}^{Y_{i}}X_{ij}>x, N(t)=n \Biggr) \\ &{}-\sum_{1\leq l< m\leq n}P \Biggl(\sum _{j=1}^{Y_{l}}X_{lj}>x, \sum _{j=1}^{Y_{m}}X_{mj}>x, N(t)=n \Biggr) \Biggr\} \\ =:& J_{1}(x,t)-J_{2}(x,t). \end{aligned}$$
(3.1)

Firstly, by Lemma 2.3 we have that for each \(i\geq 1\) as \(x\to \infty \),

$$\begin{aligned} P \Biggl(\sum_{j=1}^{Y_{i}}X_{ij}>x \Biggr) =& \sum_{k=1}^{h}P \Biggl(\sum _{j=1}^{k}X_{ij}>x \Biggr)P(Y_{i}=k) \\ \lesssim & \sum_{k=1}^{h}k \overline{F_{i}}(x)L_{F_{i}}^{-1}P(Y_{i}=k) \\ =& \nu L_{F_{i}}^{-1}\overline{F_{i}}(x) \end{aligned}$$
(3.2)

and

$$\begin{aligned} P \Biggl(\sum_{j=1}^{Y_{i}}X_{ij}>x \Biggr) =& \sum_{k=1}^{h}P \Biggl(\sum _{j=1}^{k}X_{ij}>x \Biggr)P(Y_{i}=k) \\ \gtrsim & \sum_{k=1}^{h}k \overline{F_{i}}(x)L_{F_{i}}P(Y_{i}=k) \\ =& \nu L_{F_{i}}\overline{F_{i}}(x). \end{aligned}$$
(3.3)

For \(J_{1}(x,t)\), by (1.5), (3.3), and Lemma 2.2,

$$\begin{aligned} J_{1}(x,t) =& \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}\sum _{i=1}^{n}\sum _{k=1}^{h}P \Biggl(\sum _{j=1}^{k}X_{ij}>x, N(t)=n |Y_{i}=k \Biggr)P(Y_{i}=k) \\ =& \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}\sum_{i=1}^{n} \sum_{k=1}^{h}P \bigl(N(t)=n|Y_{i}=k \bigr)P \Biggl(\sum_{j=1}^{k}X_{ij}>x \Biggr)P(Y_{i}=k) \\ =& \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}\sum_{i=1}^{n} \sum_{k=1}^{h}P \bigl(N^{*}(t)=n \bigr)P \Biggl(\sum_{j=1} ^{k}X_{ij}>x \Biggr)P(Y_{i}=k) \\ \geq &\inf_{1\leq k\leq h}P \biggl\{ \biggl\vert \frac{N^{*}(t)}{\lambda t}-1 \biggr\vert < \delta \biggr\} \sum _{i=1}^{(1-\delta )\lambda t}P \Biggl(\sum _{j=1}^{Y_{i}}X_{ij}>x \Biggr) \\ \gtrsim &\inf_{1\leq k\leq h}P \biggl\{ \biggl\vert \frac{N^{*}(t)}{ \lambda t}-1 \biggr\vert < \delta \biggr\} {\nu }\sum _{i=1}^{(1-\delta ) \lambda t}L_{F_{i}}\overline{F_{i}}(x) \\ \sim &{\nu }\sum_{i=1}^{(1- \delta )\lambda t}L_{F_{i}} \overline{F_{i}}(x) \end{aligned}$$
(3.4)

uniformly for all \(x\geq \gamma t\) as \(t\to \infty \).

Since \(\frac{M_{L}}{M_{U}}L_{F_{i}}\leq L_{F}\leq \frac{M_{U}}{M_{L}}L _{F_{i}}\), \(i\geq 1\), by (2.19) we have that

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{ \sum_{(1-\delta )\lambda t< i\leq \lambda t}L_{F_{i}}\overline{F_{i}}(x)}{ \sum_{i=1}^{(1-\delta )\lambda t}L_{F_{i}}\overline{F_{i}}(x)} \leq \frac{M_{U}^{3}\delta }{M_{L}^{3}(1-\delta )}. \end{aligned}$$

Letting \(\delta \downarrow 0\), we have

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{ \sum_{(1-\delta )\lambda t< i\leq \lambda t}L_{F_{i}}\overline{F_{i}}(x)}{ \sum_{i=1}^{(1-\delta )\lambda t}L_{F_{i}}\overline{F_{i}}(x)}=0. \end{aligned}$$

Thus

$$\begin{aligned} \liminf_{t\to \infty }\inf_{x\geq \gamma t} \frac{\sum_{i=1} ^{(1-\delta )\lambda t}L_{F_{i}}\overline{F_{i}}(x)}{\sum_{i=1} ^{\lambda t}L_{F_{i}}\overline{F_{i}}(x)}\geq 1. \end{aligned}$$
(3.5)

Therefore from (3.4) it follows that

$$\begin{aligned} \liminf_{t\to \infty }\inf_{x\geq \gamma t} \frac{J_{1}(x,t)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}\overline{F}_{i}(x)}\geq 1. \end{aligned}$$
(3.6)

For \(J_{2}(x,t)\), from Lemma 2.5 we know that for any \(\alpha >0\),

$$\begin{aligned} \lim_{n\to \infty }\sup_{x\geq \alpha n}\sup _{1\leq l< m\leq n}xP \Biggl(\sum_{j=1}^{Y_{l}}X_{lj}>x\Big| \sum_{j=1}^{Y_{m}}X_{mj}>x \Biggr)=0. \end{aligned}$$

Thus, for any \(\varepsilon >0\), there exists \(N>0\) such that for all \(n>N\), \(x\geq \alpha n\), and \(1\leq l< m\leq n\),

$$\begin{aligned} P \Biggl(\sum_{j=1}^{Y_{l}}X_{lj}>x\Big| \sum_{j=1}^{Y_{m}}X_{mj}>x \Biggr) \leq \varepsilon x^{-1}. \end{aligned}$$
(3.7)

Thus by the Fubini theorem, (3.2), and (3.7) we have that, uniformly for all \(x\geq \gamma t\),

$$\begin{aligned} &J_{2}(x,t) \\ &\quad = \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}\sum _{1\leq l< m\leq n}\sum_{k_{l}=1}^{h} \sum_{k_{m}=1}^{h} \\ &\qquad P \Biggl(\sum_{j=1}^{k_{l}}X_{lj}>x, \sum_{j=1}^{k_{m}}X_{mj}>x, N(t)=n \Big|Y_{l}=k_{l}, Y_{m}=k_{m} \Biggr)P(Y_{l}=k_{l})P(Y_{m}=k_{m}) \\ &\quad = \sum_{(1-\delta )\lambda t\leq n\leq (1+\delta )\lambda t}\sum _{1\leq l< m\leq n}\sum_{k_{l}=1}^{h} \sum_{k_{m}=1}^{h}P \Biggl(\sum _{j=1}^{k_{l}}X_{lj}>x, \sum _{j=1}^{k_{m}}X_{mj}>x \Biggr) \\ &\qquad {}\cdot P\bigl(N(t)=n|Y_{l}=k_{l}, Y_{m}=k_{m}\bigr)P(Y_{l}=k_{l})P(Y_{m}=k_{m}) \\ &\quad \leq \sum_{1\leq l< m\leq (1+\delta )\lambda t}\sum _{k_{l}=1} ^{h}\sum_{k_{m}=1}^{h}P \Biggl(\sum_{j=1}^{k_{l}}X_{lj}>x, \sum_{j=1} ^{k_{m}}X_{mj}>x \Biggr)P(Y_{l}=k_{l})P(Y_{m}=k_{m}) \\ &\quad = \sum_{1\leq l< m\leq (1+\delta )\lambda t}P \Biggl(\sum _{j=1} ^{Y_{l}}X_{lj}>x, \sum _{j=1}^{Y_{m}}X_{mj}>x \Biggr) \\ &\quad = \sum_{1\leq l< m\leq (1+\delta )\lambda t}P \Biggl(\sum _{j=1} ^{Y_{l}}X_{lj}>x\Big|\sum _{j=1}^{Y_{m}}X_{mj}>x \Biggr)P \Biggl( \sum_{j=1}^{Y_{m}}X_{mj}>x \Biggr) \\ &\quad \leq \varepsilon x^{-1}(1+\delta )\lambda t\sum _{m=1}^{(1+ \delta )\lambda t}P \Biggl(\sum _{j=1}^{Y_{m}}X_{mj}>x \Biggr) \\ &\quad \lesssim \varepsilon \lambda (1+\delta )\gamma ^{-1}\sum _{m=1}^{(1+ \delta )\lambda t}\nu L_{F_{m}}^{-1} \overline{F_{m}}(x), \end{aligned}$$

where we used (3.7) in the sixth step and (3.2) in the last step. Since \(\frac{M_{L}}{M_{U}}L_{F_{i}}\leq L_{F}\leq \frac{M _{U}}{M_{L}}L_{F_{i}}\), \(i\geq 1\), by (2.19) and the above relation we have that

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{J_{2}(x,t)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}\overline{F}_{i}(x)} \leq & \varepsilon \lambda (1+\delta )\gamma ^{-1} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{\sum_{m=1}^{(1+\delta ) \lambda t}\nu L_{F_{m}}^{-1}\overline{F_{m}}(x)}{\nu \sum_{i=1}^{ \lambda t}L_{F_{i}}\overline{F}_{i}(x)} \\ \leq &\varepsilon \lambda (1+\delta )\gamma ^{-1}M_{U}^{3}M_{L}^{-3}L _{F}^{-2}. \end{aligned}$$

Letting \(\varepsilon \downarrow 0\), we get

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{J_{2}(x,t)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}\overline{F}_{i}(x)}=0. \end{aligned}$$
(3.8)

Therefore the theorem follows from (3.1), (3.6), and (3.8). This completes the proof. □
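The compound-claim asymptotics (3.2)–(3.3) used in this proof can be checked by simulation. The sketch below uses illustrative assumptions of ours: a consistently varying (Pareto) claim distribution, for which \(L_{F}=1\) and the two bounds coincide, with Y uniform on \(\{1,2,3\}\) independent of the claims; all parameter values are hypothetical.

```python
import random

# Monte Carlo sketch of relations (3.2)-(3.3): for a consistently varying claim
# distribution L_F = 1, so P(sum_{j<=Y} X_j > x) ~ nu * F(x)-tail.
# Illustrative assumptions: X Pareto with tail x**(-1.5), Y uniform on {1, 2, 3},
# claims i.i.d. and independent of Y.
random.seed(7)
alpha, x, trials = 1.5, 100.0, 400_000
nu = 2.0  # E[Y] for Y uniform on {1, 2, 3}

def claim():
    return (1.0 - random.random()) ** (-1.0 / alpha)

hits = 0
for _ in range(trials):
    y = random.randint(1, 3)
    if sum(claim() for _ in range(y)) > x:
        hits += 1
ratio = (hits / trials) / (nu * x ** (-alpha))
# The empirical ratio should be close to 1, sandwiched between (3.2) and (3.3).
```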

Proof of Theorem 1.2

Using \(\gamma _{0}\) in Lemma 2.6 and taking \(\gamma _{1}=2\gamma _{0}\lambda \), for any \(0<\delta <1\) and \(\gamma \geq \gamma _{1}\), we have that for all \(x\geq \gamma t\) and \(t>0\),

$$\begin{aligned} P\bigl(S(t)>x\bigr) =& P\bigl(S(t)>x, N(t)\leq (1+\delta )\lambda t\bigr)+P \bigl(S(t)>x, N(t)>(1+ \delta )\lambda t\bigr) \\ =:& J_{3}(x,t)+J_{4}(x,t). \end{aligned}$$
(3.9)

For \(J_{3}(x,t)\), by Lemma 2.6 there exists a constant \(c_{0}>0\) such that, uniformly for all \(x\geq \gamma t\),

$$\begin{aligned} J_{3}(x,t) \leq & P \Biggl(\sum_{i=1}^{(1+\delta )\lambda t} \sum_{j=1} ^{Y_{i}}X_{ij}>x \Biggr) \\ \lesssim &{c_{0}} \nu \sum_{i=1}^{(1+ \delta )\lambda t}L_{F_{i}}^{-1} \overline{F}_{i}(x). \end{aligned}$$

Similarly to (3.5), after letting \(\delta \downarrow 0\), we have that

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{\sum_{i=1} ^{(1+\delta )\lambda t}L_{F_{i}}^{-1}\overline{F_{i}}(x)}{\sum_{i=1}^{\lambda t}L_{F_{i}}^{-1}\overline{F_{i}}(x)}\leq 1. \end{aligned}$$

Thus

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{J_{3}(x,t)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}^{-1}\overline{F}_{i}(x)}\leq {c_{0}}. \end{aligned}$$
(3.10)

From (2.11) we know that for \(p>J_{F}^{+}\), there exist \(\varepsilon >0\) and a constant \(B_{2}>0\) such that for all \(x\geq \varepsilon \), \(i\geq 1\), and \(n\geq 1\),

$$ \overline{F_{i}}(x/n)\leq B_{2}n^{p} \overline{F_{i}}(x). $$

Then for all \(x\geq \varepsilon \), \(i\geq 1\), \(1\leq k\leq h\), and \(n\geq 1\),

$$\begin{aligned} P \Biggl(\sum_{j=1}^{k}X_{ij}> \frac{x}{n} \Biggr)\leq \sum_{j=1}^{k} \overline{F _{i}} \biggl(\frac{x}{nk} \biggr)\leq B_{2}k^{p+1}n^{p} \overline{F_{i}}(x). \end{aligned}$$
(3.11)

Substituting (3.11) into \(J_{4}(x,t)\), for the above \(p>J_{F} ^{+}\), we have that, uniformly for all \(x\geq \gamma t\),

$$\begin{aligned} J_{4}(x,t) \leq & \sum_{n>(1+\delta )\lambda t}P \Biggl(\sum_{i=1} ^{n}\sum _{j=1}^{Y_{i}}X_{ij}>x, \tau _{n}\leq t \Biggr) \\ \leq & \sum_{n>(1+\delta )\lambda t}\sum _{i=1}^{n}P \Biggl(\sum _{j=1} ^{Y_{i}}X_{ij}> \frac{x}{n}, \tau _{n}\leq t \Biggr) \\ =& \sum_{n>(1+\delta )\lambda t}\sum_{i=1}^{n} \sum_{k=1}^{h}P \Biggl(\sum _{j=1}^{k}X_{ij}> \frac{x}{n} \Biggr)P(\tau _{n}\leq t|Y_{i}=k)P(Y_{i}=k) \\ \leq & \sum_{n>(1+\delta )\lambda t}\sum _{i=1}^{n}\sum_{k=1}^{h}P \Biggl(\sum_{j=1}^{k}X_{ij}> \frac{x}{n} \Biggr)P \biggl(\sum_{1\leq j \neq i\leq n} \theta _{j}\leq t \biggr)P(Y_{1}=k) \\ =& \sum_{n>(1+\delta )\lambda t}\sum_{i=1}^{n} \sum_{k=1}^{h}P \Biggl(\sum _{j=1}^{k}X_{ij}> \frac{x}{n} \Biggr)P(\tau _{n-1}\leq t)P(Y_{1}=k) \\ \leq & \sum_{n>(1+\delta )\lambda t}\sum _{i=1}^{n}\sum_{k=1}^{h}B _{2}k^{p+1}n^{p}\overline{F_{i}}(x)P( \tau _{n-1}\leq t)P(Y_{1}=k) \\ =& B_{2}\mathrm{E}Y_{1}^{p+1}\sum _{n>(1+\delta )\lambda t}n^{p+1}P( \tau _{n-1}\leq t) \frac{1}{n}\sum_{i=1}^{n} \overline{F_{i}}(x). \end{aligned}$$

Hence we have that

$$\begin{aligned} &\limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{J _{4}(x,t)}{\nu \sum_{i=1}^{\lambda t}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \\ &\quad \leq \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{B _{2}\mathrm{E}Y_{1}^{p+1}}{\nu }\sum_{n>(1+\delta )\lambda t}n^{p+1} \mathrm {P}( \tau _{n-1}\leq t)\frac{\sum_{i=1}^{n}\overline{F_{i}}(x)}{n\sum_{i=1} ^{\lambda t}\overline{F}_{i}(x)} \\ &\quad \leq \frac{B_{2}\mathrm{E}Y_{1}^{p+1}}{\nu } \limsup_{t\to \infty }\sup _{x\geq \gamma t} \sum_{n>(1+\delta )\lambda t}n^{p+1} \mathrm {P}(\tau _{n-1}\leq t) \frac{ \sum_{i=1}^{n}\overline{F_{i}}(x)}{n\overline{F}_{1}(x)}. \end{aligned}$$
(3.12)

From (2.19) it follows that for sufficiently large x and all \(n\geq 1\),

$$\begin{aligned} \frac{\sum_{i=1}^{n}\overline{F_{i}}(x)}{n\overline{F_{1}}(x)} \leq \frac{nM _{U1}\overline{F}(x)}{nM_{L1}\overline{F}(x)}=\frac{M_{U1}}{M_{L1}}< \infty . \end{aligned}$$
(3.13)

Therefore from (3.12), (3.13), and Lemma 2.7 we obtain that

$$\begin{aligned} \limsup_{t\to \infty }\sup_{x\geq \gamma t} \frac{J_{4}(x,t)}{ \nu \sum_{i=1}^{\lambda t}L_{F_{i}}^{-1}\overline{F}_{i}(x)} \leq & \frac{M _{U1}B_{2}\mathrm{E}Y_{1}^{p+1}}{M_{L1}\nu } \limsup_{t\to \infty } \sum_{n>(1+\delta )\lambda t}n^{p+1}\mathrm {P}(\tau _{n-1}\leq t) \\ =& 0. \end{aligned}$$
(3.14)

Then (1.7) can be obtained by substituting (3.10) and (3.14) into (3.9) and taking \(c_{1}=c_{0}\). This completes the proof of the theorem. □
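The conclusions of Theorems 1.1 and 1.2 together bracket \(P(S(t)>x)\) between \(\nu \lambda t\overline{F}(x)\) and a constant multiple of it when \(L_{F}=1\). An end-to-end Monte Carlo sketch in the simplest admissible setting (all choices below are illustrative assumptions, not the paper's general dependent model): Exp(1) interarrivals, Y uniform on \(\{1,2\}\) so \(\nu =1.5\) and \(h=2\), and i.i.d. Pareto claims with tail \(x^{-1.2}\).

```python
import random

# End-to-end Monte Carlo sketch of Theorems 1.1-1.2 in an illustrative special
# case: Exp(1) interarrivals (lambda = 1), Y uniform on {1, 2} (nu = 1.5, h = 2),
# i.i.d. Pareto claims with tail x**(-1.2), so L_F = 1.
random.seed(123)
lam, nu, alpha = 1.0, 1.5, 1.2
t, x, trials = 20.0, 2000.0, 100_000

def aggregate_claims():
    s, clock = 0.0, random.expovariate(lam)
    while clock <= t:                          # accidents arriving up to time t
        for _ in range(random.randint(1, 2)):  # Y claims per accident
            s += (1.0 - random.random()) ** (-1.0 / alpha)
        clock += random.expovariate(lam)
    return s

hits = sum(1 for _ in range(trials) if aggregate_claims() > x)
# Theorems 1.1 and 1.2 bracket P(S(t) > x) between roughly nu*lambda*t*tail(x)
# and a constant multiple of it, so the empirical ratio should be of order 1.
ratio = (hits / trials) / (nu * lam * t * x ** (-alpha))
```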

References

  1. 1.

    Asimit, A.V., Furman, E., Tang, Q., Vernic, R.: Asymptotics for risk capital allocations based on conditional tail expectation. Insur. Math. Econ. 49(3), 310–324 (2011)

  2. 2.

    Bingham, N., Goldie, C., Teugels, J.: Regular Variation. Cambridge University Press, Cambridge (1987)

  3. 3.

    Block, H.W., Savits, T.H., Shaked, M.: Some concepts of negative dependence. Ann. Probab. 10(3), 765–772 (1982)

  4. 4.

    Chen, Y., Ng, K.W.: The ruin probability of the renewal model with constant interest force and negatively dependent heavy-tailed claims. Insur. Math. Econ. 40(3), 415–423 (2007)

  5. 5.

    Chen, Y., Yuen, K.: Precise large deviations of aggregate claims in a size-dependent renewal risk model. Insur. Math. Econ. 51(2), 457–461 (2012)

  6. 6.

    Chen, Y., Zhang, W., Chun, S.U.: Precise large deviations for generalized dependent compound renewal risk model with consistent variation. Front. Math. China 9(1), 31–44 (2014)

  7. 7.

    Denisov, D., Foss, S., Korshunov, D.: Tail asymptotics for the supremum of a random walk when the mean is not finite. Queueing Syst. 46(1–2), 15–33 (2004)

  8. Ebrahimi, N., Ghosh, M.: Multivariate negative dependence. Commun. Stat., Theory Methods 10(4), 307–337 (1981)

  9. Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, Berlin (1997)

  10. Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions, 2nd edn. Springer, New York (2013)

  11. Gao, Q., Liu, X.: Uniform asymptotics for the finite-time ruin probability with upper tail asymptotically independent claims and constant force of interest. Stat. Probab. Lett. 83(6), 1527–1538 (2013)

  12. Geluk, J., Tang, Q.: Asymptotic tail probabilities of sums of dependent subexponential random variables. J. Theor. Probab. 22(4), 871–882 (2009)

  13. Hao, X., Tang, Q.: A uniform asymptotic estimate for discounted aggregate claims with subexponential tails. Insur. Math. Econ. 43(1), 116–120 (2008)

  14. He, W., Cheng, D., Wang, Y.: Asymptotic lower bounds of precise large deviations with nonnegative and dependent random variables. Stat. Probab. Lett. 83(1), 331–338 (2013)

  15. Klüppelberg, C., Stadtmüller, U.: Ruin probabilities in the presence of heavy-tails and interest rates. Scand. Actuar. J. 1998(1), 49–58 (1998)

  16. Kočetova, J., Leipus, R., Šiaulys, J.: A property of the renewal counting process with application to the finite-time ruin probability. Lith. Math. J. 49(1), 55–61 (2009)

  17. Konstantinides, D., Loukissas, F.: Precise large deviations for consistently varying-tailed distributions in the compound renewal risk model. Lith. Math. J. 50(4), 391–400 (2010)

  18. Konstantinides, D., Loukissas, F.: Precise large deviations for sums of negatively dependent random variables with common long-tailed distributions. Commun. Stat., Theory Methods 40(19–20), 3663–3671 (2011)

  19. Lehmann, E.: Some concepts of dependence. Ann. Math. Stat. 37(5), 1137–1153 (1966)

  20. Li, J.: On pairwise quasi-asymptotically independent random variables and their applications. Stat. Probab. Lett. 83(9), 2081–2087 (2013)

  21. Liu, X., Chai, C., Gao, Q.: Precise large deviations of the aggregate claims in a compound renewal risk model with dependence structures (2018)

  22. Liu, X., Gao, Q., Wang, Y.: A note on a dependent risk model with constant interest rate. Stat. Probab. Lett. 82(4), 707–712 (2012)

  23. Peng, J., Wang, D.: Asymptotics for ruin probabilities of a non-standard renewal risk model with dependence structures and exponential Lévy process investment returns. J. Ind. Manag. Optim. 13, 155–185 (2017)

  24. Peng, J., Wang, D.: Uniform asymptotics for ruin probabilities in a dependent renewal risk model with stochastic return on investments. Stoch. Int. J. Probab. Stoch. Process. 90(3), 432–471 (2018)

  25. Tang, Q.: Heavy tails of discounted aggregate claims in the continuous-time renewal model. J. Appl. Probab. 44(2), 285–294 (2007)

  26. Tang, Q., Su, C., Jiang, T., Zhang, J.: Large deviations for heavy-tailed random sums in compound renewal model. Stat. Probab. Lett. 52(1), 91–100 (2001)

  27. Tang, Q., Tsitsiashvili, G.: Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stoch. Process. Appl. 108(2), 299–325 (2003)

  28. Wang, D.: Finite-time ruin probability with heavy-tailed claims and constant interest rate. Stoch. Models 24(1), 41–57 (2008)

  29. Wang, K., Wang, Y., Gao, Q.: Uniform asymptotics for the finite-time ruin probability of a dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15(1), 109–124 (2013)

  30. Wang, K., Yang, Y., Lin, J.: Precise large deviations for widely orthant dependent random variables with dominatedly varying tails. Front. Math. China 7(5), 919–932 (2012)

  31. Yang, Y., Wang, K.: Precise large deviations for dependent random variables with applications to the compound renewal risk model. Rocky Mt. J. Math. 43(4), 1395–1414 (2013)

  32. Yang, Y., Wang, K., Konstantinides, D.G.: Uniform asymptotics for discounted aggregate claims in dependent risk models. J. Appl. Probab. 51(3), 669–684 (2014)

  33. Yang, Y., Wang, K., Liu, J., Zhang, Z.: Asymptotics for a bidimensional risk model with two geometric Lévy price processes. J. Ind. Manag. Optim. 15, 481–505 (2019)

  34. Zong, G.: Finite-time ruin probability of a nonstandard compound renewal risk model with constant force of interest. Front. Math. China 5(4), 801–809 (2010)

Acknowledgements

The authors wish to thank the referees and the Editor for their very valuable comments on an earlier version of this paper. This work was finished during a research visit of Kaiyong Wang to the University of Hong Kong. They would like to thank the Department of Statistics and Actuarial Science for its excellent hospitality.

Availability of data and materials

Not applicable.

Funding

This work is supported by the National Natural Science Foundation of China (No. 11401418), the Humanities and Social Science Foundation of the Ministry of Education of China (No. 18YJC910004), the 333 Talent Training Project of Jiangsu Province and the Jiangsu Province Key Discipline in the 13th Five-Year Plan.

Author information

Both authors contributed equally to this work. Both authors read and approved the final manuscript.

Correspondence to Kaiyong Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Keywords

  • Compound renewal risk model
  • Precise large deviations
  • The dominated variation distribution class
  • Upper tail asymptotic independence
  • Pairwise negative quadrant dependence