
Bidimensional discrete-time risk models based on bivariate claim count time series

Abstract

In this paper, we consider a class of bidimensional discrete-time risk models in which the claim counts obey specific bivariate integer-valued time series, namely the bivariate Poisson MA (BPMA) and bivariate Poisson AR (BPAR) processes. We derive the moment generating functions (m.g.f.'s) for these processes and present explicit expressions for their adjustment coefficient functions. Asymptotic approximations (upper bounds) to three different types of ruin probabilities are discussed, and the marginal value-at-risk (VaR) for each model is obtained. Numerical examples are provided to compute the adjustment coefficients discussed in the paper.

Introduction

Bidimensional risk theory has gained considerable attention in the last two decades owing to its complexity and its uses in various fields. Chan et al. [1] studied three types of ruin probabilities with phase-type distributions. Yuen et al. [2] introduced the bivariate compound binomial model to approximate the finite-time survival probability of the bivariate compound Poisson model with common shock. Li et al. [3] studied the ruin probabilities of a bidimensional perturbed insurance risk model and obtained an upper bound for the infinite-time ruin probability using the martingale technique. Avram et al. [4] studied the joint ruin problem for two insurance companies that divide claims and premia in some specified proportions. Badescu et al. [5] extended the bidimensional risk models proposed by Avram et al. and derived the Laplace transform of the time until at least one insurer is ruined. For other important works on bidimensional risk models, see Cai and Li [6], Dang et al. [7], Chen et al. [8], etc.

The univariate integer-valued time series models, such as the integer-valued moving average (INMA), the integer-valued autoregressive (INAR), and the integer-valued autoregressive moving average (INARMA) models, are based on appropriate thinning operations. This category of models has been proposed by many authors: see, e.g., Al-Osh and Alzaid [9, 10] and McKenzie [11, 12]. Quoreshi [13] proposed a bivariate integer-valued moving average (BINMA) model, which he applied to examine the correlation between stock transaction series. Pedeli and Karlis [14] introduced a bivariate integer-valued \(\operatorname{AR}(1)\) (\(\operatorname{BINAR}(1)\)) model with innovations of bivariate Poisson (BP) and bivariate negative binomial (BNB) distributions.

To describe the dependence of the claim counts among different periods, univariate integer-valued time series have been employed: Cossétte et al. [15, 16] applied the Poisson \(\operatorname{MA}(1)\) and Poisson \(\operatorname{AR}(1)\) processes to discrete-time risk models.

Let us consider an example from car insurance. Such policies usually contain at least two coverages: third-party liability insurance and collision damage waiver (CDW) coverage. If we regard the claim counts of each coverage as an integer-valued time series, and the two series are correlated, then the joint claim counts form a bivariate integer-valued time series, and the models proposed by Pedeli and Karlis [14] fit this situation well. In this paper, we extend their models to bidimensional risk contexts and study bidimensional risk models in which the bivariate claim counts obey bivariate integer-valued time series.

The paper is structured as follows. In the next section, we propose a class of general bidimensional risk models based on bivariate time series for the bivariate claim count r.v.’s. In Section 3, we present the risk models in which the bivariate claim counts obey the bivariate Poisson \(\operatorname{MA}(1)\) (\(\operatorname{BPMA}(1)\)) and bivariate Poisson \(\operatorname{AR}(1)\) (\(\operatorname{BPAR}(1)\)) processes generated by binomial thinning operations. For each model, we examine its properties and derive expressions for the adjustment coefficient function and the associated compound distributions. In Section 4, we present the asymptotic approximations to the three different types of ruin probabilities via large deviation theorems for our models. A numerical example is provided to show the adjustment coefficients and the marginal VaR values in Section 5. Detailed proofs of the main results are given in the Appendix.

Bidimensional discrete-time risk models

In this section, we consider the bidimensional risk model as follows. Let \((R_{1n},R_{2n})\) be the bidimensional discrete-time surplus process

$$ \left ( \begin{array}{@{}c@{}} R_{1n} \\ R_{2n} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} u_{1} \\ u_{2} \end{array} \right ) +n\left ( \begin{array}{@{}c@{}} \pi_{1} \\ \pi_{2} \end{array} \right )- \sum_{i=1}^{n} \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{1i}}{X_{ij}} \\ \sum_{k=1}^{N_{2i}}{Y_{ik}} \end{array} \right ), $$
(1)

where

  • \(u_{1}\) and \(u_{2}\) are the positive initial reserves of the first and second business, respectively;

  • \(\pi_{1}\) and \(\pi_{2}\) are the premia rates of the first and second business, respectively;

  • \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) are the bivariate claim counts of the two businesses in the ith period;

  • \(\{X_{ij}, i, j= 1, 2, \ldots\}\) are a sequence of i.i.d. claim size r.v.’s of the first business;

  • \(\{Y_{ik}, i, k= 1, 2, \ldots\}\) are a sequence of i.i.d. claim size r.v.’s of the second business;

  • \(\{X_{ij}, i, j= 1, 2, \ldots\}\) and \(\{Y_{ik}, i, k= 1, 2, \ldots \}\) are mutually independent.

In this paper, we restrict our attention to light-tailed distributions. The \(X_{ij}\) (\(Y_{ik}\)) are copies of a r.v. X (Y) whose distribution function (d.f.) is \(F(x)\), \(x> 0\) (\(G(y)\), \(y> 0\)), with mean \(\mu_{1}\) (\(\mu_{2}\)) and m.g.f. \(m_{X}(t)\) (\(m_{Y}(s)\)).

Let \((N_{(1n)},N_{(2n)})=\sum_{i=1}^{n}(N_{1i},N_{2i})\) be the aggregate bivariate claim counts of n periods; \((W_{1i},W_{2i})=(\sum_{j=1}^{N_{1i}}X_{ij},\sum_{k=1}^{N_{2i}}Y_{ik})\) be the bivariate aggregate claims of the two businesses in the ith period, \(i=1,2,\ldots\) ; and \((S_{1n},S_{2n})=\sum_{i=1}^{n}(W_{1i},W_{2i})\) be the aggregate bivariate claims for the two businesses over n periods. We denote them by the vectors \(\mathbf{N}_{(n)}\), \(\mathbf{W}_{i}\), and \(\mathbf{S}_{n}\), respectively.

There are three types of ruin probabilities defined through different times of ruin:

  • \(T_{\mathrm{max}}=\inf\{n\mid\max\{R_{1n}, R_{2n}\}\leq0, n\in N^{+}\}\);

  • \(T_{\mathrm{sum}}=\inf\{n\mid R_{1n}+ R_{2n}\leq0, n\in N^{+}\}\);

  • \(T_{\mathrm{min}}=\inf\{n\mid\min\{R_{1n}, R_{2n}\}\leq0, n\in N^{+}\}\).

The corresponding ruin probabilities are denoted by

  • \(\Psi_{\mathrm{max}}(u_{1}, u_{2})=P\{T_{\mathrm{max}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\);

  • \(\Psi_{\mathrm{sum}}(u_{1}, u_{2})=P\{T_{\mathrm{sum}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\);

  • \(\Psi_{\mathrm{min}}(u_{1}, u_{2})=P\{T_{\mathrm{min}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\).
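
To make these definitions concrete, here is a small Monte Carlo sketch of surplus process (1) and the three ruin events over a finite horizon. It is only an illustration: the claim counts are taken i.i.d. bivariate Poisson across periods (the serially dependent BPMA and BPAR counts appear in the next section), the claim sizes are exponential, and all parameter values are assumed.

```python
import math
import random

random.seed(7)

def poisson(mu):
    # Knuth's inversion-by-multiplication sampler; adequate for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def exp_rv(mean):
    # Inverse-transform sampling of an exponential claim size.
    return -mean * math.log(1.0 - random.random())

# Illustrative parameters (assumed, not taken from the paper).
u1, u2 = 10.0, 10.0          # initial reserves
pi1, pi2 = 2.0, 1.7          # premia per period
lam1, lam2, lam = 1.0, 0.7, 0.5
mu1, mu2 = 1.0, 1.0          # mean claim sizes

horizon, n_sim = 100, 3000
ruin_max = ruin_sum = ruin_min = 0
for _ in range(n_sim):
    r1, r2 = u1, u2
    hit_max = hit_sum = hit_min = False
    for _ in range(horizon):
        m = poisson(lam)                       # common shock count
        n1, n2 = poisson(lam1) + m, poisson(lam2) + m
        r1 += pi1 - sum(exp_rv(mu1) for _ in range(n1))
        r2 += pi2 - sum(exp_rv(mu2) for _ in range(n2))
        hit_max = hit_max or max(r1, r2) <= 0  # both businesses ruined
        hit_sum = hit_sum or r1 + r2 <= 0      # aggregate reserve ruined
        hit_min = hit_min or min(r1, r2) <= 0  # at least one ruined
    ruin_max += hit_max
    ruin_sum += hit_sum
    ruin_min += hit_min

p_max, p_sum, p_min = ruin_max / n_sim, ruin_sum / n_sim, ruin_min / n_sim
print(p_max, p_sum, p_min)
```

Since both reserves nonpositive implies a nonpositive sum, which in turn implies a nonpositive minimum, the estimates always satisfy \(\Psi_{\mathrm{max}}\leq\Psi_{\mathrm{sum}}\leq\Psi_{\mathrm{min}}\).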

Claim count time series based on the ∘-thinning operation

In this section, we first introduce the bivariate Poisson distribution and the bivariate binomial thinning operator ‘∘’. Let \(M_{1}\), \(M_{2}\), and M be three mutually independent Poisson r.v.’s with parameters \(\lambda_{1}>0\), \(\lambda_{2}>0\), and \(\lambda>0\). According to Marshall and Olkin [17], \((U,V)=(M_{1}+M,M_{2}+M)\) (M is called the common shock r.v. in the insurance field) obeys the bivariate Poisson (BP) distribution, whose probability mass function is

$$ P(U=n_{1},V=n_{2})=\sum _{n=0}^{n_{1}\wedge n_{2}} \frac{\lambda_{1}^{n_{1}-n}\lambda_{2}^{n_{2}-n}\lambda^{n}}{(n_{1}-n)!(n_{2}-n)!n!} \exp\{- \lambda_{1}-\lambda_{2}-\lambda\}, $$
(2)

where \(n_{1}\wedge n_{2} = \min(n_{1}, n_{2})\); moreover, \(E[U]=\operatorname{Var}[U]=\lambda _{1}+\lambda\), \(E[V]=\operatorname{Var}[V]=\lambda_{2}+\lambda\), and \(\operatorname{Cov}[U,V]=\lambda\). We write \((U,V)\sim\operatorname{BP}(\lambda_{1},\lambda_{2},\lambda)\). Its probability generating function (p.g.f.) is

$$ \hat{P}(t,s)= \exp\bigl\{ \lambda_{1}(t-1)+ \lambda_{2}(s-1)+\lambda(ts-1)\bigr\} . $$
(3)
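
As a numerical sanity check of (2) (with arbitrary parameter values), truncated enumeration of the pmf recovers total mass 1 and the moments stated above:

```python
import math

def bp_pmf(n1, n2, lam1, lam2, lam):
    # Probability mass function of BP(lam1, lam2, lam) from Eq. (2).
    total = 0.0
    for n in range(min(n1, n2) + 1):
        total += (lam1 ** (n1 - n) * lam2 ** (n2 - n) * lam ** n
                  / (math.factorial(n1 - n) * math.factorial(n2 - n)
                     * math.factorial(n)))
    return total * math.exp(-lam1 - lam2 - lam)

lam1, lam2, lam = 1.0, 0.7, 0.5   # arbitrary illustrative parameters

# Truncated enumeration of mass, means, and covariance of (U, V);
# the Poisson tails beyond K = 40 are numerically negligible here.
K = 40
mass = mean_u = mean_v = e_uv = 0.0
for n1 in range(K):
    for n2 in range(K):
        p = bp_pmf(n1, n2, lam1, lam2, lam)
        mass += p
        mean_u += n1 * p
        mean_v += n2 * p
        e_uv += n1 * n2 * p
cov = e_uv - mean_u * mean_v
print(round(mass, 6), round(mean_u, 6), round(mean_v, 6), round(cov, 6))
```

The enumeration reproduces \(E[U]=\lambda_{1}+\lambda\), \(E[V]=\lambda_{2}+\lambda\), and \(\operatorname{Cov}[U,V]=\lambda\).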

For BP r.v.’s, Pedeli and Karlis [14] derived the bivariate binomial thinning operator by referring to the univariate binomial thinning mechanism. Letting the binomial thinning operators \(\alpha_{1}\) and \(\alpha_{2}\) (\(\alpha_{1},\alpha_{2}\in[0,1]\)) act on U and V, respectively, we write

$$ \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0& \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} U \\ V \end{array} \right )= \left ( \begin{array}{@{}c@{}} \sum_{i=1}^{U}\delta_{i}^{(1)} \\ \sum_{j=1}^{V}\delta_{j}^{(2)} \end{array} \right ), $$
(4)

where \(\{\delta_{i}^{(1)}, i= 1, 2, \ldots\}\), and \(\{\delta_{j}^{(2)}, j= 1, 2, \ldots\}\), are two mutually independent sequences of i.i.d. Bernoulli r.v.’s with mean \(\alpha_{1}\) and \(\alpha_{2}\), respectively. Furthermore, their joint p.g.f. is

$$\begin{aligned} \hat{P}(t,s) =& \exp \bigl\{ \lambda_{1}[ \alpha_{1}t+\bar{\alpha}_{1}-1] \bigr\} \times\exp \bigl\{ \lambda_{2}[\alpha_{2}s+\bar{\alpha}_{2}-1] \bigr\} \\ &{}\times \exp \bigl\{ \lambda\bigl[(\alpha_{1}t+\bar{ \alpha}_{1}) (\alpha_{2}s+\bar{\alpha} _{2})-1 \bigr] \bigr\} , \end{aligned}$$
(5)

where \(\bar{\alpha}_{1}=1-\alpha_{1}\) and \(\bar{\alpha}_{2}=1-\alpha_{2}\).
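
The thinning operation (4) is easy to simulate: draw (U, V) from the common-shock construction, then keep each unit of the count with probability \(\alpha_{1}\) or \(\alpha_{2}\). A seeded sketch with assumed parameter values confirms the moments implied by (5): \(E[\alpha_{1}\circ U]=\alpha_{1}(\lambda_{1}+\lambda)\), \(E[\alpha_{2}\circ V]=\alpha_{2}(\lambda_{2}+\lambda)\), and \(\operatorname{Cov}[\alpha_{1}\circ U,\alpha_{2}\circ V]=\alpha_{1}\alpha_{2}\lambda\).

```python
import math
import random

random.seed(42)

def poisson(mu):
    # Knuth's Poisson sampler; adequate for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def thin(count, alpha):
    # Binomial thinning: each of `count` items survives with probability alpha.
    return sum(1 for _ in range(count) if random.random() < alpha)

lam1, lam2, lam = 1.0, 0.7, 0.5   # assumed parameters
a1, a2 = 0.6, 0.3

n_sim = 200_000
s_u = s_v = s_uv = 0.0
for _ in range(n_sim):
    m = poisson(lam)                           # common shock
    u, v = poisson(lam1) + m, poisson(lam2) + m
    tu, tv = thin(u, a1), thin(v, a2)
    s_u += tu
    s_v += tv
    s_uv += tu * tv

mean_u, mean_v = s_u / n_sim, s_v / n_sim
cov_uv = s_uv / n_sim - mean_u * mean_v
print(round(mean_u, 3), round(mean_v, 3), round(cov_uv, 3))
```

With these parameters the sample moments come out close to 0.9, 0.36, and 0.09, respectively.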

Risk model for \(\operatorname{BPMA}(1)\)

Definition and properties

Let us consider a \(\operatorname{BPMA}(1)\) process for \(\{(N_{1i},N_{2i}), i=1,2,\ldots\}\), whose dynamics are defined as follows:

$$ \left ( \begin{array}{@{}c@{}} N_{1i} \\ N_{2i} \end{array} \right ) = \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} \varepsilon_{1,i-1} \\ \varepsilon_{2,i-1} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{1i} \\ \varepsilon_{2i} \end{array} \right ),\quad i= 1,2,\ldots, $$
(6)

where \(\{(\varepsilon_{1i}, \varepsilon_{2i}), i=0,1,\ldots\}\) is a sequence of i.i.d. \(\operatorname{BP}(\lambda_{1},\lambda_{2},\lambda)\) r.v.’s. To distinguish the thinning variables attached to different periods, we write

$$ \alpha_{1}\circ\varepsilon_{1i}=\sum _{j=1}^{\varepsilon_{1i}}\delta _{i+1,i,j}^{(1)}, \qquad \alpha_{2}\circ\varepsilon_{2i}=\sum _{j=1}^{\varepsilon_{2i}}\delta _{i+1,i,j}^{(2)}, \quad i=1, 2, \ldots, $$
(7)

where \(\{\delta_{i,i-1,j}^{(1)},i,j=1, 2, \ldots\}\) and \(\{\delta _{i,i-1,j}^{(2)},i,j=1, 2, \ldots\}\) are two independent sequences of i.i.d. Bernoulli r.v.’s with means \(\alpha_{1}\) and \(\alpha_{2}\), respectively.

From (4) and (6), we have

$$\begin{aligned}& \left ( \begin{array}{@{}c@{}} N_{11} \\ N_{21} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{10}+\varepsilon_{11} \\ \alpha_{2}\circ\varepsilon_{20}+\varepsilon_{21} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{10}}\delta_{10j}^{(1)}+\varepsilon_{11} \\ \sum_{j=1}^{\varepsilon_{20}}\delta_{10j}^{(2)}+\varepsilon_{21} \end{array} \right ), \\& \left ( \begin{array}{@{}c@{}}N_{1k} \\ N_{2k} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{1,k-1}+\varepsilon_{1k} \\ \alpha_{2}\circ\varepsilon_{2,k-1}+\varepsilon_{2k} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{1,k-1}}\delta_{k,k-1,j}^{(1)}+\varepsilon_{1k} \\ \sum_{j=1}^{\varepsilon_{2,k-1}}\delta_{k,k-1,j}^{(2)}+\varepsilon_{2k} \end{array} \right ) \end{aligned}$$

for \(k=2,3\ldots\) .

As stated in Al-Osh and Alzaid [9, 10], the marginal distributions of \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) are uniquely determined by the distributions of \(\{(\varepsilon_{1i},\varepsilon _{2i}),i=0,1,\ldots\}\), hence, they have identical bivariate Poisson margins and

$$\begin{aligned}& E[N_{1i}] = \operatorname{Var}[N_{1i}]=E[ \alpha_{1}\circ\varepsilon _{1,i-1}+\varepsilon_{1i}]=(1+ \alpha_{1}) (\lambda_{1}+\lambda), \\& E[N_{2i}] = \operatorname{Var}[N_{2i}]=E[ \alpha_{2}\circ\varepsilon _{2,i-1}+\varepsilon_{2i}]=(1+ \alpha_{2}) (\lambda_{2}+\lambda). \end{aligned}$$

The covariances are listed as follows:

$$\begin{aligned}& \operatorname{Cov}[N_{1i}, N_{2i}]=(1+ \alpha_{1}\alpha_{2})\lambda,\quad i=1,2\ldots ; \\& \operatorname{Cov}[N_{ki}, N_{k,i+h}] = \left \{ \begin{array}{l@{\quad}l} \alpha_{k}(\lambda_{k}+\lambda), &\text{if }h=1; \\ 0, &\text{if }h>1; \end{array} \right .\quad k=1, 2; \\& \operatorname{Cov}[N_{ki}, N_{j,i+h}]= \left \{ \begin{array}{l@{\quad}l} \alpha_{j}\lambda, &\text{if }h=1; \\ 0, &\text{if }h>1; \end{array} \right .\quad k\neq j=1, 2. \end{aligned}$$
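
These moments can be verified by simulating recursion (6) directly. The following seeded sketch, with assumed parameter values, checks the two means \((1+\alpha_{k})(\lambda_{k}+\lambda)\) and the contemporaneous covariance \((1+\alpha_{1}\alpha_{2})\lambda\):

```python
import math
import random

random.seed(1)

def poisson(mu):
    # Knuth's Poisson sampler; adequate for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def thin(count, alpha):
    # Binomial thinning of an integer count.
    return sum(1 for _ in range(count) if random.random() < alpha)

lam1, lam2, lam = 1.0, 0.7, 0.5   # assumed parameters
a1, a2 = 0.6, 0.3

def bp_innovation():
    # (eps_1, eps_2) ~ BP(lam1, lam2, lam) via the common-shock construction.
    m = poisson(lam)
    return poisson(lam1) + m, poisson(lam2) + m

T = 200_000
e1_prev, e2_prev = bp_innovation()       # (eps_{1,0}, eps_{2,0})
s1 = s2 = s12 = 0.0
for _ in range(T):
    e1, e2 = bp_innovation()
    n1 = thin(e1_prev, a1) + e1          # N_{1i} = a1 o eps_{1,i-1} + eps_{1i}
    n2 = thin(e2_prev, a2) + e2          # N_{2i} = a2 o eps_{2,i-1} + eps_{2i}
    s1 += n1
    s2 += n2
    s12 += n1 * n2
    e1_prev, e2_prev = e1, e2

mean1, mean2 = s1 / T, s2 / T
cov12 = s12 / T - mean1 * mean2
print(round(mean1, 3), round(mean2, 3), round(cov12, 3))
```

With these parameters the targets are \(1.6\times1.5=2.4\), \(1.3\times1.2=1.56\), and \(1.18\times0.5=0.59\).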

Expression for adjustment coefficient function

Generally speaking, adjustment coefficients are regarded as safety indices of surplus processes; they are the positive roots of the adjustment coefficient functions. In classical unidimensional Lundberg-type risk models, most of which assume that the surplus processes are Lévy processes, the adjustment coefficient functions are obtained via martingale techniques as the cumulant generating functions (c.g.f.’s) of the net loss processes. However, in our risk models, the surplus processes are no longer Lévy processes. According to Nyrhinen [18] and Müller and Pflug [19], adjustment coefficient functions also exist in unidimensional non-Lévy contexts via another approach: letting \(c_{n}(t)\) be the c.g.f. of the aggregate net loss process (aggregate claims minus aggregate premia income) at time n, the adjustment coefficient function is given by \(c(t)=\lim_{n\rightarrow\infty}\frac {1}{n}c_{n}(t)\). In this subsection, we derive the joint c.g.f., denoted by \(c_{n}(t,s)\), of the aggregate net loss process for model \(\operatorname{BPMA}(1)\). Analogously to Cossétte et al. [15], the adjustment coefficient function \(c(t,s)\) is given by \(c(t,s)=\lim_{n\rightarrow\infty}\frac{1}{n}c_{n}(t,s)\).

Proposition 3.1

The expression for \(c(t,s)\) for model \(\operatorname{BPMA}(1)\) is given by

$$\begin{aligned} c(t,s) =& \lambda_{1}\bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)-1\bigr) + \lambda_{2} \bigl(\alpha_{2}m_{Y}^{2}(s)+\bar{ \alpha}_{2}m_{Y}(s)-1\bigr) \\ &{}+ \lambda\bigl[\bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(\alpha_{2}m_{Y}^{2}(s) +\bar{\alpha}_{2}m_{Y}(s)\bigr)-1\bigr] -\pi_{1}t-\pi_{2}s. \end{aligned}$$
(8)

Proof

See the Appendix. □
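
In practice the roots of \(c(t,s)=0\) must be found numerically. As a sketch, the following locates the positive root of \(c(t,t)=0\) from (8) for exponential claims (so \(m_X(t)=1/(1-\mu_{1}t)\)), under assumed parameter values and expected-value premia; bisection works because \(c(t,t)\) is negative just to the right of the origin and increases to +∞ as t approaches \(1/\mu_{1}\).

```python
lam1 = lam2 = 0.5
lam = 0.5
a1 = a2 = 0.5
mu1 = mu2 = 1.0               # exponential claim means (assumed)
rho = 0.2                     # safety loading (assumed)
# Expected-value premia: (1 + rho) x expected claims per period.
pi1 = (1 + rho) * (1 + a1) * (lam1 + lam) * mu1
pi2 = (1 + rho) * (1 + a2) * (lam2 + lam) * mu2

def mx(t):
    # Exponential m.g.f., valid for t < 1/mu1.
    return 1.0 / (1.0 - mu1 * t)

def my(s):
    return 1.0 / (1.0 - mu2 * s)

def c(t, s):
    # Adjustment coefficient function (8) for the BPMA(1) model.
    fx = a1 * mx(t) ** 2 + (1 - a1) * mx(t)
    fy = a2 * my(s) ** 2 + (1 - a2) * my(s)
    return (lam1 * (fx - 1) + lam2 * (fy - 1)
            + lam * (fx * fy - 1) - pi1 * t - pi2 * s)

# Bisection for the unique positive root of c(t, t) = 0.
lo, hi = 1e-6, 1.0 / mu1 - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if c(mid, mid) < 0:
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)
print(round(t_star, 6))
```

The same routine applies along any ray \(s=lt\), which is the form needed in Section 4.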

Remark 3.1

Referring to the last term of (25), we have

$$\begin{aligned}& \lambda\bigl[ (n-1) \bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(\alpha_{2}m_{Y}^{2}(s)+ \bar{\alpha} _{2}m_{Y}(s)\bigr) \\& \quad {}+ \bigl(\alpha_{1}m_{X}(t)+\bar{ \alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}(s)+\bar{ \alpha}_{2}\bigr)+m_{X}(t)m_{Y}(s)-(n+1)\bigr]. \end{aligned}$$
(9)

This term deserves an explanation. As mentioned in the assumptions of (2), (4), and (6), we can decompose \((\varepsilon_{1i},\varepsilon_{2i})\) into \((\varepsilon _{1i}'+\varepsilon_{i}',\varepsilon_{2i}'+\varepsilon_{i}')\) for every \(i=0,1,2,\ldots\) , where \(\varepsilon_{1i}'\), \(\varepsilon_{2i}'\), and \(\varepsilon_{i}'\) are three mutually independent Poisson r.v.’s with parameters \(\lambda_{1}\), \(\lambda_{2}\), and λ, respectively, and \(\varepsilon_{i}'\) is called a common shock r.v. Thus there exists a sub-\(\operatorname{BPMA}(1)\) process embedded in \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\), which we denote by \(\{(N_{i}^{\prime(1)},N_{i}^{\prime(2)}),i=1,2,\ldots\}\); its expression is

$$ \left ( \begin{array}{@{}c@{}} N_{i}^{\prime(1)} \\ N_{i}^{\prime(2)} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{i-1}'+\varepsilon_{i}' \\ \alpha_{2}\circ\varepsilon_{i-1}'+\varepsilon_{i}' \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{i-1}'}\delta_{i,i-1,j}^{(1)}+\varepsilon_{i}' \\ \sum_{j=1}^{\varepsilon_{i-1}'}\delta_{i,i-1,j}^{(2)}+\varepsilon_{i}' \end{array} \right ), $$

here \(\{(\delta_{i,i-1,j}^{(1)},\delta_{i,i-1,j}^{(2)}),i=0,1,2,\ldots\}\) can also be regarded as a sequence of i.i.d. bivariate Bernoulli r.v.’s (see Marshall and Olkin [17]). Since, under our assumptions, \(X_{ij}\) and \(Y_{ij}\) are mutually independent, the assumed independence between \(\delta_{i,i-1,j}^{(1)}\) and \(\delta _{i,i-1,j}^{(2)}\) causes no contradiction. Thus, (9) is the joint m.g.f. of the compound sub-\(\operatorname{BPMA}(1)\) process.

Proposition 3.2

\((S_{1n},S_{2n})\) follows a bivariate compound Poisson distribution. That means we can express it as

$$ S_{1n} = \left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(1n)}}C_{1j}^{(n)},&N_{(1n)}>0, \\ 0,&N_{(1n)}=0, \end{array} \right .\qquad S_{2n} = \left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(2n)}}C_{2j}^{(n)},&N_{(2n)}>0, \\ 0,&N_{(2n)}=0, \end{array} \right . $$

where \(N_{(1n)}\) and \(N_{(2n)}\) have marginal Poisson distributions with parameters \((n+\alpha_{1})(\lambda_{1}+\lambda)\) and \((n+\alpha _{2})(\lambda_{2}+\lambda)\), respectively; \(\{C_{1j}^{(n)},j=1,2,\ldots\}\) and \(\{C_{2j}^{(n)},j=1,2,\ldots\}\) are two mutually independent sequences of i.i.d. r.v.’s with the mixed convolutional d.f.’s

$$\begin{aligned}& F_{C_{1j}^{(n)}}(x) = \frac{1}{n+\alpha_{1}} \bigl[(n-1)\alpha _{1}F^{*2}(x)+ \bigl(1+\alpha_{1}+(n-1)\bar{\alpha}_{1} \bigr)F(x) \bigr], \\& G_{C_{2j}^{(n)}}(y) = \frac{1}{n+\alpha_{2}} \bigl[(n-1)\alpha _{2}G^{*2}(y)+ \bigl(1+\alpha_{2}+(n-1)\bar{\alpha}_{2} \bigr)G(y) \bigr], \end{aligned}$$

where \(F^{*2}(x)\) and \(G^{*2}(y)\) are 2-fold convolutions.

If \(n\rightarrow\infty\), then \((N_{(1n)},N_{(2n)})\) asymptotically obeys \(\operatorname{BP}(n\lambda_{1},n\lambda_{2},n\lambda)\). Furthermore,

$$ F_{C_{1j}^{(n)}}(x)\stackrel{d}{\rightarrow}\alpha_{1}F^{*2}(x)+ \bar{\alpha} _{1}F(x),\qquad G_{C_{2j}^{(n)}}(y)\stackrel{d}{ \rightarrow}\alpha_{2}G^{*2}(y)+\bar{\alpha}_{2}G(y). $$

Proof

Referring to (25) and Remark 3.1, we easily get the conclusion. □

Risk model for \(\operatorname{BPAR}(1)\)

Definition and properties

We consider another bivariate time series model for count data to describe the relationship of the bivariate claim counts among different periods. Suppose that the claim count process \(\{ (N_{1i},N_{2i}), i=1,2,\ldots\}\) is a bivariate Poisson \(\operatorname{AR}(1)\) (\(\operatorname{BPAR}(1)\)) process, whose autoregressive dynamics are given by

$$ \left ( \begin{array}{@{}c@{}} N_{1i} \\ N_{2i} \end{array} \right )= \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} N_{1,i-1} \\ N_{2,i-1} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{1i} \\ \varepsilon_{2i} \end{array} \right ),\quad i= 1,2,\ldots, $$
(10)

where \(\{(\varepsilon_{1i},\varepsilon_{2i}), i=0, 1, \ldots\}\) is a sequence of i.i.d. \(\operatorname{BP}(\lambda_{1}, \lambda_{2}, \lambda)\) r.v.’s. The process \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) is stationary if and only if \(\alpha_{1}, \alpha_{2}\in[0,1)\). For convenience, we take the initial r.v. \((N_{10},N_{20})\) of the \(\operatorname{BPAR}(1)\) process to be a copy of \((\varepsilon_{10},\varepsilon_{20})\). Similarly to the \(\operatorname{BPMA}(1)\) case, the dependence structure of the \(\operatorname{BPAR}(1)\) process can be unfolded as

$$ \left ( \begin{array}{@{}c@{}} N_{11} \\ N_{21} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}}+\varepsilon_{11} \\ \sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}}+\varepsilon_{21} \end{array} \right ) $$

and

$$ \left ( \begin{array}{@{}c@{}} N_{1k} \\ N_{2k} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{10}}{\prod_{h=1}^{k}{\delta_{h0j}^{(1)}}} + \sum_{i=1}^{k-1}\sum_{j=1}^{\varepsilon_{1i}}\prod_{h=i+1}^{k}{\delta _{hij}^{(1)}} + \varepsilon_{1k} \\ \sum_{j=1}^{N_{20}}{\prod_{h=1}^{k}{\delta_{h0j}^{(2)}}} + \sum_{i=1}^{k-1}\sum_{j=1}^{\varepsilon_{2i}}\prod_{h=i+1}^{k}{\delta _{hij}^{(2)}} + \varepsilon_{2k} \end{array} \right ) $$

for \(k=2,3\ldots\) .

As mentioned in the previous subsection, \(\{\delta_{hij}^{(1)}, h>i=0,1,2,\ldots,j=1,2,\ldots\}\) and \(\{\delta_{hij}^{(2)}, h>i=0,1,2,\ldots,j=1,2,\ldots\}\) are two independent sequences of i.i.d. Bernoulli r.v.’s with means \(\alpha_{1}\) and \(\alpha_{2}\), respectively. We can obtain the expectations and covariances of the \(\operatorname{BPAR}(1)\) process: for \(k,j=1,2\); \(i=1,2,\ldots\); and \(h=0,1,2,\ldots\),

$$\begin{aligned}& E[N_{ki}]=\operatorname{Var}[N_{ki}] = \frac{\lambda_{k}+\lambda}{1-\alpha_{k}}, \qquad \operatorname{Cov}[N_{ki},N_{k,i+h}] = \alpha_{k}^{h}\frac{\lambda_{k}+\lambda}{1-\alpha_{k}}, \\& \operatorname{Corr}[N_{ki},N_{k,i+h}] = \alpha_{k}^{h}, \qquad \operatorname{Cov}[N_{ki},N_{j,i+h}] = \alpha_{j}^{h}\frac{\lambda}{1-\alpha_{1}\alpha_{2}}, \\& \operatorname{Corr}[N_{ki},N_{j,i+h}] =\alpha_{j}^{h} \frac{\lambda\sqrt{(1-\alpha_{1})(1-\alpha_{2})}}{ (1-\alpha_{1}\alpha_{2})\sqrt{(\lambda_{1}+\lambda)(\lambda_{2}+\lambda)}},\quad k\neq j=1,2. \end{aligned}$$
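
A direct simulation of recursion (10), with assumed parameter values and a burn-in toward stationarity, reproduces the stationary mean \((\lambda_{k}+\lambda)/(1-\alpha_{k})\) and the contemporaneous covariance \(\lambda/(1-\alpha_{1}\alpha_{2})\):

```python
import math
import random

random.seed(3)

def poisson(mu):
    # Knuth's Poisson sampler; adequate for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def thin(count, alpha):
    # Binomial thinning of an integer count.
    return sum(1 for _ in range(count) if random.random() < alpha)

lam1, lam2, lam = 1.0, 0.7, 0.5   # assumed parameters
a1, a2 = 0.4, 0.4

T, burn = 200_000, 1_000
n1 = n2 = 0
s1 = s2 = s12 = 0.0
for i in range(T + burn):
    m = poisson(lam)                        # common shock innovation
    n1 = thin(n1, a1) + poisson(lam1) + m   # N_{1i} = a1 o N_{1,i-1} + eps_{1i}
    n2 = thin(n2, a2) + poisson(lam2) + m   # N_{2i} = a2 o N_{2,i-1} + eps_{2i}
    if i >= burn:                           # discard burn-in
        s1 += n1
        s2 += n2
        s12 += n1 * n2

mean1, mean2 = s1 / T, s2 / T
cov12 = s12 / T - mean1 * mean2
print(round(mean1, 3), round(mean2, 3), round(cov12, 3))
```

Here the targets are \(1.5/0.6=2.5\), \(1.2/0.6=2.0\), and \(0.5/0.84\approx0.595\).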

Expression for adjustment coefficient function

Proposition 3.3

Assume that \(\alpha_{1},\alpha_{2}\in[0,1)\), \(\alpha_{1}m_{X}(t)<1\), and \(\alpha_{2}m_{Y}(s)<1\); then the expression for \(c(t,s)\) is given by

$$\begin{aligned} c(t,s) =& \lambda_{1} \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}-1 \biggr]+ \lambda_{2} \biggl[\frac{\bar{\alpha}_{2}m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] \\ &{}+ \lambda \biggl\{ \frac{\bar{\alpha}_{1}\bar{\alpha}_{2}m_{X}(t)m_{Y}(s)}{1-\alpha_{1}\alpha_{2}m_{X}(t)m_{Y}(s)} \biggl[ \frac{\alpha_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}+ \frac{\alpha_{2}m_{Y}(s)}{1-\alpha _{2}m_{Y}(s)}+1 \biggr]-1 \biggr\} \\ &{}-\pi_{1}t-\pi_{2}s; \end{aligned}$$
(11)

and for the special situation if \(\alpha_{1}=0\), \(\alpha_{2}>0\), and \(\alpha _{2}m_{Y}(s)<1\) still holds, we have

$$\begin{aligned} c(t,s) =& \lambda_{1} \bigl(m_{X}(t)-1 \bigr) + \lambda_{2} \biggl[\frac{\bar{\alpha}_{2}m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] \\ &{}+ \lambda \biggl[\frac{\bar{\alpha}_{2}m_{X}(t)m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] -\pi_{1}t- \pi_{2}s; \end{aligned}$$
(12)

symmetrically, if \(\alpha_{2}=0\), \(\alpha_{1}>0\), and \(\alpha_{1}m_{X}(t)<1\) still holds, then

$$\begin{aligned} c(t,s) =& \lambda_{1} \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}-1 \biggr] + \lambda_{2} \bigl(m_{Y}(s)-1 \bigr) \\ &{}+ \lambda \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)m_{Y}(s)}{1-\alpha_{1}m_{X}(t)}-1 \biggr] -\pi_{1}t- \pi_{2}s. \end{aligned}$$
(13)

Proof

See the Appendix. □
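
As in the \(\operatorname{BPMA}(1)\) case, the root of \(c(t,t)=0\) from (11) can be located by bisection, here for exponential claims under assumed parameters; the constraints \(\alpha_{1}m_{X}(t)<1\) and \(\alpha_{2}m_{Y}(t)<1\) confine the search to \(t<(1-\alpha_{1})/\mu\) when \(\mu_{1}=\mu_{2}=\mu\) and \(\alpha_{1}=\alpha_{2}\).

```python
lam1 = lam2 = 0.5
lam = 0.5
a1 = a2 = 0.4
mu = 1.0                      # common exponential claim mean (assumed)
rho = 0.2                     # safety loading (assumed)
# Expected-value premia: (1 + rho) x stationary expected claims per period.
pi1 = pi2 = (1 + rho) * (lam1 + lam) / (1 - a1) * mu

def m(t):
    # Exponential m.g.f., valid for t < 1/mu.
    return 1.0 / (1.0 - mu * t)

def c(t, s):
    # Adjustment coefficient function (11) for the BPAR(1) model.
    gx = (1 - a1) * m(t) / (1 - a1 * m(t))
    gy = (1 - a2) * m(s) / (1 - a2 * m(s))
    mixed = ((1 - a1) * (1 - a2) * m(t) * m(s)
             / (1 - a1 * a2 * m(t) * m(s))
             * (a1 * m(t) / (1 - a1 * m(t))
                + a2 * m(s) / (1 - a2 * m(s)) + 1))
    return (lam1 * (gx - 1) + lam2 * (gy - 1)
            + lam * (mixed - 1) - pi1 * t - pi2 * s)

# Bisection on (0, (1 - a1)/mu), where a1*m(t) < 1 holds.
lo, hi = 1e-6, (1 - a1) / mu - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if c(mid, mid) < 0:
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)
print(round(t_star, 6))
```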

Remark 3.2

Similarly, referring to the last term of (35), we focus on

$$ \lambda\Biggl[ A_{n}\bigl\{ m_{X}(t)\bigr\} B_{n}\bigl\{ m_{Y}(s)\bigr\} +m_{X}(t)m_{Y}(s) \sum_{i=0}^{n-1}{A_{i} \bigl\{ m_{X}(t)\bigr\} B_{i}\bigl\{ m_{Y}(s)\bigr\} }-(n+1)\Biggr]. $$
(14)

Referring to Remark 3.1, there exists a sub-\(\operatorname{BPAR}(1)\) process embedded in \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\); we denote it by \(\{ (N_{i}^{\prime(1)},N_{i}^{\prime(2)}),i=1,2,\ldots\}\) with the dynamical formula

$$ \left ( \begin{array}{@{}c@{}} N_{i}^{\prime(1)} \\ N_{i}^{\prime(2)} \end{array} \right ) =\left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} N_{i-1}^{\prime(1)} \\ N_{i-1}^{\prime(2)} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{i}' \\ \varepsilon_{i}' \end{array} \right ) $$

for \(i=1,2,\ldots,n\), where \(\{\varepsilon_{i}', i=1,2,\ldots\}\) is a sequence of mutually independent \(\operatorname{Po}(\lambda)\) r.v.’s, and \(N_{0}^{\prime(1)}=N_{0}^{\prime(2)}\) is a copy of \(\varepsilon_{1}'\). Equation (14) is the joint m.g.f. of \(\sum_{i=1}^{n} (\sum_{j=1}^{N_{i}^{\prime(1)}}X_{i,j}, \sum_{j=1}^{N_{i}^{\prime(2)}}Y_{i,j} )\).

Given (34) and Remark 3.2, we have the following conclusion.

Proposition 3.4

For \(0<\alpha_{1},\alpha_{2}<1\), \((S_{1n},S_{2n})\) follows the bivariate compound Poisson distribution as

$$ S_{1n}=\left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(1n)}}C_{1j}^{(n)},&N_{(1n)}>0, \\ 0,&N_{(1n)}=0, \end{array} \right .\qquad S_{2n}=\left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(2n)}}C_{2j}^{(n)},&N_{(2n)}>0, \\ 0,&N_{(2n)}=0, \end{array} \right . $$

where \(N_{(1n)}\) and \(N_{(2n)}\) have marginal Poisson distributions with parameters \((n+\alpha_{1})(\lambda_{1}+\lambda)\) and \((n+\alpha _{2})(\lambda_{2}+\lambda)\), respectively; and \(\{C_{1j}^{(n)},j=1,2,\ldots \}\) and \(\{C_{2j}^{(n)},j=1,2,\ldots\}\) are two independent sequences of i.i.d. r.v.’s with the mixed convolutional d.f.’s

$$\begin{aligned}& F_{C_{1j}^{(n)}}(x) = \frac{1}{n+\alpha_{1}}\Biggl\{ (n\bar{\alpha}_{1}+ \bar{\alpha}_{1}\alpha_{1})F(x) \\& \hphantom{F_{C_{1j}^{(n)}}(x) =}{} + \sum_{i=2}^{n-1}\bigl[ \alpha_{1}+\alpha_{1}^{i}+(n-i)\bar{ \alpha}_{1}\alpha_{1}^{i}\bigr]F^{*i}(x) +\bigl(\alpha_{1}^{n-1}+\alpha_{1}^{n} \bigr)F^{*n}(x)\Biggr\} , \\& G_{C_{2j}^{(n)}}(y) = \frac{1}{n+\alpha_{2}}\Biggl\{ (n\bar{\alpha}_{2}+ \bar{\alpha}_{2}\alpha_{2})G(y) \\& \hphantom{G_{C_{2j}^{(n)}}(y) =}{} + \sum_{i=2}^{n-1}\bigl[ \alpha_{2}+\alpha_{2}^{i}+(n-i)\bar{ \alpha}_{2}\alpha_{2}^{i}\bigr]G^{*i}(y) + \bigl(\alpha_{2}^{n-1}+\alpha_{2}^{n} \bigr)G^{*n}(y)\Biggr\} . \end{aligned}$$

If \(n\rightarrow\infty\), then \((N_{(1n)},N_{(2n)})\) asymptotically obeys \(\operatorname{BP}(n\lambda_{1},n\lambda_{2},n\lambda)\), and furthermore,

$$ F_{C_{1j}^{(n)}}(x)\stackrel{d}{\rightarrow}\bar{\alpha}_{1}\sum _{i=1}^{\infty}\alpha_{1}^{i-1}F^{*i}(x), \qquad G_{C_{2j}^{(n)}}(y)\stackrel{d}{\rightarrow}\bar{\alpha}_{2} \sum_{i=1}^{\infty}\alpha_{2}^{i-1}G^{*i}(y). $$

Approximations to ruin probabilities

In this section, we mainly discuss the approximations to \(\Psi _{\mathrm{max}}(u_{1},u_{2})\) and \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\) for the two different models mentioned above.

Define \(t^{0}:=\sup\{t; m_{X}(t)<\infty\}\), \(s^{0}:=\sup\{s; m_{Y}(s)<\infty\}\), and \(G:=\{(t,s); c(t,s)<\infty, (t,s)>(0,0)\}\). Recalling the expressions of \(c(t,s)\) of the two different models we mentioned and the assumptions that \(X_{ij}\) (\(Y_{ij}\)) are of i.i.d. light-tailed distributions, it is clear that \(t^{0}>0\) and \(s^{0}>0\), and G is nonempty.

Lemma 4.1

As to the adjustment coefficient functions for \(\operatorname{BPMA}(1)\) and \(\operatorname{BPAR}(1)\) cases, the following statements hold.

  • (a) The equation \(c(t,s)=0\) has at least one root in G.

  • (b) For given \(l\geq0\), the equation \(c(t,lt)=0\) has only one root in \((0, t^{0})\).

  • (c) For given \(l\geq0\), if \(v>0\) solves \(c(t,lt)=0\), then \(c(t,lt)>0\) for all \(t>v\) and \(c(t,lt)<0\) for all \(0< t<v\).

Proof

We prove only the \(\operatorname{BPMA}(1)\) case here; the \(\operatorname{BPAR}(1)\) case can be proved in the same way.

Let \(s=lt\) for some given \(l\geq0\). Here the premia are assumed to follow the expected value principle with safety loading \(\rho>0\), i.e. \(\pi_{1}=(1+\rho)(1+\alpha_{1})(\lambda_{1}+\lambda)\mu_{1}\) and \(\pi_{2}=(1+\rho)(1+\alpha_{2})(\lambda_{2}+\lambda)\mu_{2}\). Then

$$\begin{aligned} \frac{dc(t,lt)}{dt} =& \lambda_{1}m_{X}^{\prime}(t) \bigl(2\alpha_{1}m_{X}(t)+\bar{\alpha}_{1} \bigr) +\lambda_{2}lm_{Y}^{\prime}(lt) \bigl(2 \alpha_{2}m_{Y}(lt)+\bar{\alpha}_{2}\bigr) \\ &{}+ \lambda\bigl[ m_{X}^{\prime}(t) \bigl(2 \alpha_{1}m_{X}(t)+\bar{\alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}^{2}(lt)+\bar{ \alpha}_{2}m_{Y}(lt)\bigr) \\ &{}+ lm_{Y}^{\prime}(lt) \bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(2 \alpha_{2}m_{Y}(lt)+\bar{\alpha}_{2}\bigr)\bigr] \\ &{}- (1+\rho) (1+\alpha_{1}) (\lambda_{1}+\lambda) \mu_{1} - l(1+\rho) (1+\alpha_{2}) (\lambda_{2}+ \lambda)\mu_{2}, \end{aligned}$$

so that

$$\begin{aligned} {\biggl.\frac{dc(t,lt)}{dt}\biggr|_{t=0} } =& (1+\alpha_{1}) ( \lambda_{1}+\lambda)\mu_{1}+l(1+\alpha_{2}) ( \lambda_{2}+\lambda)\mu _{2} \\ &{}- (1+\rho) (1+\alpha_{1}) (\lambda_{1}+\lambda) \mu_{1}-l(1+\rho) (1+\alpha _{2}) (\lambda_{2}+ \lambda)\mu_{2} < 0. \end{aligned}$$

For every \(t>0\) and \(l\geq0\), we have \(\frac{d^{2}c(t,lt)}{dt^{2}}>0\), since \(m_{X}^{\prime}(t)\) and \(m_{Y}^{\prime}(s)\) are monotone increasing functions of t and s.

This means that \(c(t,lt)\) is convex in \(t\in(0,t^{0})\), so \(\frac{dc(t,lt)}{dt}\) is a monotone increasing function of t. Since \(c_{1}^{\prime}(0,0)<0\) and \(c_{2}^{\prime}(0,0)<0\), we have \({\frac{dc(t,lt)}{dt}}|_{t=0}<0\); on the other hand, for each given \(l\geq0\) there exists \(t_{1}>0\) such that \(\frac{dc(t_{1},lt_{1})}{dt}>0\). By convexity, the equation \(c(t,lt)=0\) therefore has exactly one root in \((0,t^{0})\), and varying l over \([0,\infty)\) proves (a) and (b).

(c) The result is obvious from the convexity of \(c(t,lt)\) on \((0,t^{0})\). □

Lemma 4.2

Referring to Lemma  4.1, and letting \(\Delta=\{(t,s);c(t,s)=0,(t,s)\in G\}\), then

  • (a) \(0<\frac{dc(t,s)}{dt}+\frac{dc(t,s)}{ds}<\infty\) for \((t,s)\in\Delta\);

  • (b) \(c_{n}(t,s)<\infty\) for \((t,s)\in\Delta\).

Proof

(a) There exists \(l\geq0\) such that \(s=lt\); then \(c(t,s)=c(t,lt)=0\) for any \((t,s)\in\Delta\). By the mean value theorem, there exists \(\xi\in(0,t)\) such that \(c(t,lt)-c(0,0)=c_{1}^{\prime}(\xi,l\xi)t+c_{2}^{\prime}(\xi,l\xi)lt=0\). Since \(\frac{d^{2}c(t,lt)}{dt^{2}}>0\) for \(t\geq0\), \(c_{1}^{\prime}(t,lt)+c_{2}^{\prime}(t,lt)l>c_{1}^{\prime}(\xi,l\xi)+c_{2}^{\prime}(\xi,l\xi)l=0\). Varying l from 0 to ∞ proves the conclusion.

For (b), note that for the \(\operatorname{BPMA}(1)\) process,

$$\begin{aligned} c_{n}(t,s) =& (n-1)c(t,s) + \lambda_{1} \bigl[(1+\alpha_{1})m_{X}(t)+ \bar{\alpha}_{1}-2 \bigr] + \lambda_{2} \bigl[(1+\alpha_{2})m_{Y}(s)+ \bar{\alpha}_{2}-2 \bigr] \\ &{}+ \lambda \bigl[\bigl(\alpha_{1}m_{X}(t)+\bar{ \alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}(s)+\bar{ \alpha} _{2}\bigr)+m_{X}(t)m_{Y}(s)-2 \bigr]< \infty \end{aligned}$$

and for the \(\operatorname{BPAR}(1)\) process (we present only the case \(0<\alpha_{1},\alpha_{2}<1\)),

$$\begin{aligned} c_{n}(t,s) =& (n-1)c(t,s) \\ &{}+ \lambda_{1} \biggl\{ \frac{m_{X}(t)[1-A_{n}\{m_{X}(t)\}+\bar{\alpha}_{1}]}{1-\alpha_{1}m_{X}(t)} +A_{n}\bigl\{ m_{X}(t)\bigr\} -2 \biggr\} \\ &{}+ \lambda_{2} \biggl\{ \frac{m_{Y}(s)[1-B_{n}\{m_{Y}(s)\}+\bar{\alpha}_{2}]}{1-\alpha_{2}m_{Y}(s)} +B_{n}\bigl\{ m_{Y}(s)\bigr\} -2 \biggr\} \\ &{}+ \lambda \biggl\{ \frac{m_{X}(t)m_{Y}(s)}{1-\alpha_{1}\alpha_{2}m_{X}(t)m_{Y}(s)}L' +A_{n}\bigl\{ m_{X}(t)\bigr\} B_{n}\bigl\{ m_{Y}(s)\bigr\} -2 \biggr\} , \end{aligned}$$

where

$$\begin{aligned} L' =& 1-A_{n-1}\bigl\{ m_{X}(t)\bigr\} B_{n-1}\bigl\{ m_{Y}(s)\bigr\} + \frac{\alpha_{1}\bar{\alpha}_{2}m_{X}(t)[1-A_{n-1}\{m_{X}(t)\}]}{1-\alpha _{1}m_{X}(t)} \\ &{}+ \frac{\bar{\alpha}_{1}\alpha_{2}m_{Y}(s)[1-B_{n-1}\{m_{Y}(s)\}]}{1-\alpha_{2}m_{Y}(s)}, \end{aligned}$$

since \(\alpha_{1}m_{X}(t)<1\) and \(\alpha_{2}m_{Y}(s)<1\) at every point \((t,s)\in\Delta\), recalling the expressions of \(A_{n}\{m_{X}(t)\}\) and \(B_{n}\{m_{Y}(s)\}\) given in the Appendix, we obtain \(c_{n}(t,s)<\infty\). So the conclusion is proved. In fact, Δ is a smooth curve in the first quadrant. □

As for the ruin problems of bidimensional risk models, many authors have given only upper bounds for \(\Psi_{\mathrm{max}}(u_{1},u_{2})\) via martingale inequalities (see Chan et al. [1] and Li et al. [3]) for Lévy processes, because the Cramér-Lundberg constants are hard to obtain. Since our risk models are non-Lévy processes, many classical results of ruin theory, especially the Wald martingale theorem, cannot be applied. The large deviations theorem was introduced to approximate ruin probabilities by Glynn and Whitt [20]; however, their attention was confined to univariate contexts. We borrow the idea of Glynn and Whitt [20] and extend their main results to bivariate contexts.

Theorem 4.1

For the two risk models, \(\Psi_{\mathrm{max}}(u_{1},u_{2}) \stackrel{\mathrm{log}}{\sim}\inf_{(t,s)\in\Delta}{e^{-tu_{1}-su_{2}}}\) as \(u_{1},u_{2}\rightarrow\infty\), where \((t^{*},s^{*})=\arg\inf_{(t,s)\in\Delta}\{e^{-tu_{1}-su_{2}}\}\) is the adjustment coefficient.

Proof

See the Appendix. □
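
When Δ is the zero-level curve \(\{(t,s):c(t,s)=0, t,s>0\}\), minimizing \(e^{-tu_{1}-su_{2}}\) over Δ amounts to maximizing \(tu_{1}+su_{2}\) along the curve, which can be located numerically. The following sketch does this for model \(\operatorname{BPMA}(1)\) with unit-mean exponential claims; the values of \(\alpha_{1}\), \(\alpha_{2}\), the premium rates \(\pi_{1}=12.6\), \(\pi_{2}=9.0\), and the grid size are illustrative assumptions (the λ's and the ratio \(u_{1}:u_{2}=2:3\) follow the numerical section below).

```python
def c_bpma(t, s, p):
    # adjustment coefficient function of model BPMA(1), i.e. the limit (26),
    # with Exp(b1), Exp(b2) claim sizes
    lam1, lam2, lam, a1, a2, b1, b2, pi1, pi2 = p
    mX, mY = 1.0 / (1.0 - t / b1), 1.0 / (1.0 - s / b2)
    g1 = a1 * mX ** 2 + (1 - a1) * mX
    g2 = a2 * mY ** 2 + (1 - a2) * mY
    return (lam1 * (g1 - 1) + lam2 * (g2 - 1) + lam * (g1 * g2 - 1)
            - pi1 * t - pi2 * s)

def bisect(f, lo, hi, it=80):
    # assumes f(lo) < 0 < f(hi)
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# lam1, lam2, lam, a1, a2, b1, b2, pi1, pi2 -- the alpha's, beta's and
# premium rates are illustrative assumptions
p = (5.0, 3.0, 2.0, 0.5, 0.5, 1.0, 1.0, 12.6, 9.0)
u1, u2 = 2.0, 3.0                     # capital direction u1 : u2 = 2 : 3

t_end = bisect(lambda t: c_bpma(t, 0.0, p), 1e-9, 1.0 - 1e-9)
best = (0.0, 0.0, 0.0)
for k in range(1, 400):               # walk along Delta: s solves c(t, s) = 0
    t = t_end * k / 400.0
    s = bisect(lambda s: c_bpma(t, s, p), 1e-12, 1.0 - 1e-9)
    best = max(best, (t * u1 + s * u2, t, s))
score, t_star, s_star = best
```

The bisections rely on a single sign change, which holds here since c is convex, vanishes at the origin with negative directional slope under positive safety loading, and blows up as t or s approaches the m.g.f. singularity.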

Note that \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\) is actually a univariate ruin probability. By the univariate large deviations theorem, the following result is then easy to obtain.

Theorem 4.2

For the two risk models, \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\stackrel{\mathrm{log}}{\sim }e^{-t^{*}(u_{1}+u_{2})}\) as \(u_{1}+u_{2}\rightarrow\infty\), where \(t^{*}=\arg\{ t;c(t,t)=0, t>0\}\) is the adjustment coefficient.

Proof

The c.g.f. of \(S_{1n}+S_{2n}-n\pi_{1}-n\pi_{2}\) is \(c_{n}(t,t)\), and the corresponding adjustment coefficient function is \(c(t,t)\); the result then follows from the univariate large deviations theory of Glynn and Whitt [20]. □

Theorem 4.3

For our two models, we have

$$ \Psi_{\mathrm{min}}(u_{1},u_{2})\leq\Psi_{1}(u_{1})+ \Psi_{2}(u_{2})-\Psi_{\mathrm{max}}(u_{1},u_{2}), $$

where \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\) are the marginal ruin probabilities of the first and the second businesses, respectively. Furthermore, the approximations to \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\) are presented as

$$\begin{aligned}& \Psi_{1}(u_{1})\stackrel{\mathrm{log}}{ \sim}e^{-t'u_{1}}, \quad u_{1}\rightarrow\infty, \end{aligned}$$
(15)
$$\begin{aligned}& \Psi_{2}(u_{2})\stackrel{\mathrm{log}}{ \sim}e^{-s'u_{2}},\quad u_{2}\rightarrow\infty, \end{aligned}$$
(16)

where for model \(\operatorname{BPMA}(1)\), \(t'\) and \(s'\) are the positive roots of \((\lambda_{1}+\lambda)(\alpha_{1}m_{X}^{2}(t')+\bar{\alpha}_{1}m_{X}(t')-1)-\pi_{1}t'\) and \((\lambda_{2}+\lambda)(\alpha_{2}m_{Y}^{2}(s')+\bar{\alpha}_{2}m_{Y}(s')-1)-\pi_{2}s'\), respectively; and for model \(\operatorname{BPAR}(1)\), \(t'\) and \(s'\) are the positive roots of \((\lambda_{1}+\lambda)(\frac{\bar{\alpha}_{1}m_{X}(t')}{1-\alpha_{1}m_{X}(t')}-1)-\pi_{1}t'\) and \((\lambda_{2}+\lambda)(\frac{\bar{\alpha}_{2}m_{Y}(s')}{1-\alpha_{2}m_{Y}(s')}-1)-\pi_{2}s'\), respectively.

Proof

For the approximations to marginal ruin probabilities see Cossétte et al. [15]. □
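
For exponential claims, \(m_{X}(t)=1/(1-t/\beta_{1})\), and the root \(t'\) for model \(\operatorname{BPMA}(1)\) can be computed by bisection on \((0,\beta_{1})\). A minimal sketch; \(\alpha_{1}\), \(\beta_{1}\), and the premium rule used for \(\pi_{1}\) are illustrative assumptions.

```python
def adj_coef_bpma_marginal(lam1, lam, alpha1, beta1, pi1):
    abar1 = 1.0 - alpha1

    def h(t):  # marginal adjustment coefficient function of the first business
        mX = 1.0 / (1.0 - t / beta1)          # m.g.f. of Exp(beta1), t < beta1
        return (lam1 + lam) * (alpha1 * mX * mX + abar1 * mX - 1.0) - pi1 * t

    lo, hi = 1e-9, beta1 * (1.0 - 1e-9)
    # h(lo) < 0 under positive safety loading; h(t) -> +infinity as t -> beta1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# lambda's and rho follow the numerical section; alpha1, beta1 and the premium
# rule pi1 = (1 + rho) * E[per-period aggregate claim] are assumptions
lam1, lam, alpha1, beta1, rho = 5.0, 2.0, 0.5, 1.0, 0.2
pi1 = (1 + rho) * (lam1 + lam) * (1 + alpha1) / beta1   # = 12.6
t_prime = adj_coef_bpma_marginal(lam1, lam, alpha1, beta1, pi1)
```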

Remark 4.1

For \(\Psi_{\mathrm{min}}(u_{1}, u_{2})\), since \(\Psi _{\mathrm{max}}(u_{1}, u_{2})=o(e^{-t'u_{1}-s'u_{2}})\),

$$ \Psi_{\mathrm{min}}(u_{1}, u_{2})\sim \Psi_{1}(u_{1})+\Psi_{2}(u_{2}), \quad \text{as } u_{1},u_{2}\rightarrow\infty. $$

Numerical experiments and simulations

Calculations for adjustment coefficients

Let \(u_{1}:u_{2}=2:3\), \(\lambda_{1}=5\), \(\lambda_{2}=3\), \(\lambda=2\), and let X and Y follow exponential distributions with parameters \(\beta_{1}\) and \(\beta_{2}\), respectively, i.e. \(\mu_{1}=1/\beta_{1}\), \(m_{X}(t)=\frac{1}{1-t/\beta_{1}}\), \(t<\beta_{1}\), and \(\mu_{2}=1/\beta_{2}\), \(m_{Y}(s)=\frac{1}{1-s/\beta_{2}}\), \(s<\beta_{2}\); the safety loading coefficient is \(\rho=0.2\) for both classes of business. We compute three groups of adjustment coefficients: the first group, \((t^{*},s^{*})\), for \(\Psi_{\mathrm{max}}(u_{1},u_{2})\); the second group, \(t^{*}\), for \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\); and the third group, \(t'\) and \(s'\), for \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\), respectively. In Tables 1-6, the values of \(\alpha_{1}\) vary down the rows and the values of \(\alpha_{2}\) across the columns.
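
As an illustration of the second group, the adjustment coefficient \(t^{*}\) of \(\Psi_{\mathrm{sum}}\) for model \(\operatorname{BPMA}(1)\) is the positive root of \(c(t,t)=0\) and can be found by bisection. A sketch with the λ's above; \(\alpha_{1}=\alpha_{2}=0.5\), \(\beta_{1}=\beta_{2}=1\), and the premium rates \(\pi_{1}=12.6\), \(\pi_{2}=9.0\) are illustrative assumptions.

```python
def c_diag(t):
    # c(t, t) for model BPMA(1) with Exp(1) claims; lambda1 = 5, lambda2 = 3,
    # lambda = 2 as above, alpha1 = alpha2 = 0.5 and pi1 + pi2 = 21.6 assumed
    lam1, lam2, lam, a = 5.0, 3.0, 2.0, 0.5
    m = 1.0 / (1.0 - t)                  # m_X(t) = m_Y(t) for beta = 1
    g = a * m ** 2 + (1 - a) * m
    return lam1 * (g - 1) + lam2 * (g - 1) + lam * (g * g - 1) - 21.6 * t

lo, hi = 1e-9, 1.0 - 1e-9                # c(t,t) < 0 near 0, -> +inf near 1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if c_diag(mid) < 0 else (lo, mid)
t_star_sum = 0.5 * (lo + hi)             # adjustment coefficient of Psi_sum
```

Since \(c(t,t)\) is convex with negative slope at the origin under positive loading, the positive root is unique and bisection is safe.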

Table 1 The adjustment coefficient \(\pmb{(t^{*},s^{*})}\) of \(\pmb{\Psi_{\mathrm{max}}}\) for model \(\pmb{\operatorname{BPMA}(1)}\)
Table 2 The adjustment coefficient \(\pmb{(t^{*},s^{*})}\) of \(\pmb{\Psi_{\mathrm{max}}}\) for model \(\pmb{\operatorname{BPAR}(1)}\)
Table 3 The adjustment coefficient \(\pmb{t^{*}}\) of \(\pmb{\Psi_{\mathrm{sum}}}\) for model \(\pmb{\operatorname{BPMA}(1)}\)
Table 4 The adjustment coefficient \(\pmb{t^{*}}\) of \(\pmb{\Psi_{\mathrm{sum}}}\) for model \(\pmb{\operatorname{BPAR}(1)}\)
Table 5 Adjustment coefficients of the marginal ruin probabilities for model \(\pmb{\operatorname{BPMA}(1)}\)
Table 6 Adjustment coefficients of the marginal ruin probabilities for model \(\pmb{\operatorname{BPAR}(1)}\)

Calculations for marginal VaR

In this subsection, we give the marginal VaR values for our two models under the assumption that the \(X_{ij}\) and \(Y_{ij}\) are mutually independent exponential r.v.’s with parameter 1, \(i,j=1,2,\ldots\) . Then, according to Propositions 3.2 and 3.3, we derive the asymptotic densities of \(\mathbf{S}_{n}\) for large n as

$$\begin{aligned} f_{\mathbf{S}_{n}}(x,y) =& e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \\ &{}+e^{-n(\lambda_{1}+\lambda_{2}+\lambda)}\sum_{k_{1}=1}^{\infty}\sum _{k_{2}=1}^{\infty}\sum _{k=0}^{k_{1}\wedge k_{2}} \frac{n^{k_{1}+k_{2}-k}\lambda_{1}^{k_{1}-k}\lambda_{2}^{k_{2}-k}\lambda ^{k}}{(k_{1}-k)!(k_{2}-k)!k!} \\ &{}\times \sum_{i=0}^{k_{1}}\left ( \begin{array}{@{}c@{}} k_{1} \\ i \end{array} \right )\alpha_{1}^{i} \bar{\alpha}_{1}^{k_{1}-i}\operatorname{Ga}(x;k_{1}+i,1) \\ &{}\times \sum_{j=0}^{k_{2}}\left ( \begin{array}{@{}c@{}} k_{2} \\ j \end{array} \right )\alpha_{2}^{j} \bar{\alpha}_{2}^{k_{2}-j}\operatorname{Ga}(y;k_{2}+j,1) \end{aligned}$$

and

$$\begin{aligned} f_{\mathbf{S}_{n}}(x,y) =& e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \\ &{}+ e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \sum_{k_{1}=1}^{\infty}\sum _{k_{2}=1}^{\infty}\sum _{k=0}^{k_{1}\wedge k_{2}} \frac{n^{k_{1}+k_{2}-k}\lambda_{1}^{k_{1}-k}\lambda_{2}^{k_{2}-k}\lambda ^{k}}{(k_{1}-k)!(k_{2}-k)!k!} \\ &{}\times \bar{\alpha}_{1}^{k_{1}} \Biggl[\sum _{i=1}^{\infty}\alpha _{1}^{i-1} \operatorname{Ga}(x;i,1) \Biggr]^{*k_{1}} \bar{\alpha}_{2}^{k_{2}} \Biggl[\sum_{j=1}^{\infty}\alpha_{2}^{j-1} \operatorname{Ga}(y;j,1) \Biggr]^{*k_{2}} \end{aligned}$$

for model \(\operatorname{BPMA}(1)\) and model \(\operatorname{BPAR}(1)\), respectively, where \(\operatorname{Ga}(x;i,1)\) and \(\operatorname{Ga}(y;j,1)\) are gamma densities with parameters \((i, 1)\) and \((j, 1)\) in the variables x and y, respectively. We also get the marginal densities for model \(\operatorname{BPMA}(1)\):

$$\begin{aligned}& f_{S_{1n}}(x) = e^{-n(\lambda_{1}+\lambda)}+e^{-n(\lambda_{1}+\lambda)}\sum _{k_{1}=1}^{\infty }\frac{[n(\lambda_{1}+\lambda)]^{k_{1}}}{k_{1}!} \\& \hphantom{f_{S_{1n}}(x) =}{}\times \sum_{i=0}^{k_{1}} \left ( \begin{array}{@{}c@{}} k_{1} \\ i \end{array} \right )\alpha_{1}^{i} \bar{\alpha}_{1}^{k_{1}-i}\operatorname{Ga}(x;k_{1}+i,1), \\& f_{S_{2n}}(y) = e^{-n(\lambda_{2}+\lambda)}+e^{-n(\lambda_{2}+\lambda)}\sum _{k_{2}=1}^{\infty }\frac{[n(\lambda_{2}+\lambda)]^{k_{2}}}{k_{2}!} \\& \hphantom{f_{S_{2n}}(y) =}{}\times \sum_{j=0}^{k_{2}} \left ( \begin{array}{@{}c@{}} k_{2} \\ j \end{array} \right )\alpha_{2}^{j} \bar{\alpha}_{2}^{k_{2}-j}\operatorname{Ga}(y;k_{2}+j,1); \end{aligned}$$

and the marginal densities for model \(\operatorname{BPAR}(1)\) for \(0<\alpha_{1},\alpha_{2}<1\):

$$\begin{aligned}& f_{S_{1n}}(x) = e^{-n(\lambda_{1}+\lambda)}+e^{-n(\lambda_{1}+\lambda)}\sum _{k_{1}=1}^{\infty }\frac{[n(\lambda_{1}+\lambda)]^{k_{1}}}{k_{1}!} \\& \hphantom{f_{S_{1n}}(x) =}{} \times \bar{\alpha}_{1}^{k_{1}} \Biggl[\sum _{i=1}^{\infty}\alpha_{1}^{i-1} \operatorname{Ga}(x;i,1) \Biggr]^{*k_{1}}, \\& f_{S_{2n}}(y) = e^{-n(\lambda_{2}+\lambda)}+e^{-n(\lambda_{2}+\lambda)}\sum _{k_{2}=1}^{\infty }\frac{[n(\lambda_{2}+\lambda)]^{k_{2}}}{k_{2}!} \\& \hphantom{f_{S_{2n}}(y) =}{} \times \bar{\alpha}_{2}^{k_{2}} \Biggl[\sum _{j=1}^{\infty}\alpha_{2}^{j-1} \operatorname{Ga}(y;j,1) \Biggr]^{*k_{2}}. \end{aligned}$$

So, at the same level, \(\operatorname{VaR}_{S_{1n}}(\cdot)=\operatorname{VaR}_{S_{2n}}(\cdot)\) for both models if \(\alpha_{1}=\alpha_{2}\) and \(\lambda_{1}=\lambda_{2}\); in that case we denote the common value by \(\operatorname{VaR}_{S_{n}}(\cdot)\). The marginal VaR for each model is presented in Tables 7 and 8.
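
A marginal VaR value can be computed by inverting the c.d.f. implied by the marginal density \(f_{S_{1n}}\) of model \(\operatorname{BPMA}(1)\) above: since the gamma shapes \(k_{1}+i\) are integers, the Erlang/Poisson identity gives the gamma c.d.f.'s in closed form. A sketch; the values of n, \(\alpha_{1}\), the truncation level kmax, and the confidence level are illustrative assumptions.

```python
import math

def erlang_cdfs(x, smax):
    # cdfs[k] = P(Gamma(k, 1) <= x) for integer shapes k = 1..smax,
    # via the identity P(Gamma(k, 1) <= x) = P(Poisson(x) >= k)
    cdfs = [1.0] * (smax + 1)
    term = math.exp(-x)
    tail = term                          # accumulates P(Poisson(x) <= k - 1)
    for k in range(1, smax + 1):
        cdfs[k] = 1.0 - tail
        term *= x / k
        tail += term
    return cdfs

def cdf_S1n(x, n, lam1, lam, alpha1, kmax=40):
    rate = n * (lam1 + lam)
    ga = erlang_cdfs(x, 2 * kmax)
    total = math.exp(-rate)              # atom at zero: no claims at all
    pois = math.exp(-rate)
    for k1 in range(1, kmax + 1):
        pois *= rate / k1                # Poisson(rate) mass at k1
        inner = 0.0
        for i in range(k1 + 1):
            w = math.comb(k1, i) * alpha1 ** i * (1 - alpha1) ** (k1 - i)
            inner += w * ga[k1 + i]
        total += pois * inner
    return total

def var_S1n(p, n, lam1, lam, alpha1):
    lo, hi = 0.0, 200.0
    for _ in range(60):                  # bisection; the c.d.f. is nondecreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf_S1n(mid, n, lam1, lam, alpha1) < p else (lo, mid)
    return 0.5 * (lo + hi)

v95 = var_S1n(0.95, n=1, lam1=5.0, lam=2.0, alpha1=0.3)
```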

Table 7 Marginal VaR for model \(\pmb{\operatorname{BPMA}(1)}\)
Table 8 Marginal VaR for model \(\pmb{\operatorname{BPAR}(1)}\)

Conclusions and comments

In this paper, we propose a class of bidimensional discrete-time risk models whose bivariate claim counts obey the \(\operatorname{BPMA}(1)\) and \(\operatorname{BPAR}(1)\) processes. We derive their adjustment coefficient functions and the asymptotic distributions of the bivariate compound claim processes in finite time, obtain upper bounds for three types of ruin probabilities by large deviations theory, and present examples computing the three types of adjustment coefficients for the corresponding ruin probabilities as well as the marginal VaR values.

However, much further work remains to be done. The assumptions can be generalized so that the bivariate claim sizes are copula-distributed under common shocks, or so that the claim size distributions of the two businesses are heavy-tailed; the approximations to the three types of ruin probabilities would then have to be discussed in a different way.

References

1. Chan, W-S, Yang, H, Zhang, L: Some results on ruin probabilities in a two-dimensional risk model. Insur. Math. Econ. 32, 345-358 (2003)
2. Yuen, KC, Guo, J, Wu, X: On the first time of ruin in the bivariate compound Poisson model. Insur. Math. Econ. 38, 298-308 (2006)
3. Li, J, Liu, Z, Tang, Q: On the ruin probabilities of a bidimensional perturbed risk model. Insur. Math. Econ. 41, 185-195 (2007)
4. Avram, F, Palmowski, Z, Pistorius, M: A two-dimensional ruin problem on the positive quadrant. Insur. Math. Econ. 42, 227-234 (2008)
5. Badescu, A, Cheung, E, Rabehasaina, A: Two-dimensional risk model with proportional reinsurance. J. Appl. Probab. 48(3), 749-765 (2011)
6. Cai, J, Li, J: Dependence properties and bounds for ruin probabilities in multivariate compound risk models. J. Multivar. Anal. 98, 757-773 (2007)
7. Dang, L, Zhu, N, Zhang, H: Survival probability for a two-dimensional risk model. Insur. Math. Econ. 44, 491-496 (2009)
8. Chen, Y, Yuen, KC, Ng, KW: Asymptotics for the ruin probabilities of a two-dimensional renewal risk model with heavy-tailed claims. Appl. Stoch. Models Bus. Ind. 27, 290-300 (2011)
9. Al-Osh, MA, Alzaid, AA: First-order integer-valued autoregressive (\(\operatorname{INAR}(1)\)) process. J. Time Ser. Anal. 8, 261-275 (1987)
10. Al-Osh, MA, Alzaid, AA: Integer-valued moving average (INMA) process. Stat. Pap. 29, 281-300 (1988)
11. McKenzie, E: Autoregressive-moving average processes with negative binomial and geometric marginal distributions. Adv. Appl. Probab. 18, 679-705 (1986)
12. McKenzie, E: Some ARMA models for dependent sequences of Poisson counts. Adv. Appl. Probab. 20(4), 822-835 (1988)
13. Quoreshi, S: Bivariate time series modeling of financial count data. Commun. Stat., Theory Methods 35(7), 1343-1358 (2006)
14. Pedeli, X, Karlis, D: A bivariate \(\operatorname{INAR}(1)\) process with application. Stat. Model. 11(4), 325-349 (2011)
15. Cossétte, H, Marceau, E, Maume-Deschamps, V: Discrete-time risk models based on time series for count random variables. ASTIN Bull. 40, 123-150 (2009)
16. Cossétte, H, Marceau, E, Toureille, F: Risk models based on time series for count random variables. Insur. Math. Econ. 48, 19-28 (2011)
17. Marshall, A, Olkin, I: A family of bivariate distributions generated by the bivariate Bernoulli distribution. J. Am. Stat. Assoc. 80, 332-338 (1985)
18. Nyrhinen, H: Rough descriptions of ruin for a general class of surplus processes. Adv. Appl. Probab. 30, 1008-1026 (1998)
19. Müller, A, Pflug, G: Asymptotic ruin probabilities for risk processes with dependent increments. Insur. Math. Econ. 28, 381-392 (2001)
20. Glynn, P, Whitt, W: Logarithm asymptotics for steady-state tail probabilities in a single-server queue. J. Appl. Probab. 31, 131-156 (1994)


Acknowledgements

The author is grateful to the referees for their useful and constructive suggestions. This work is supported by National Natural Science Foundation of China (Nos. 11271155, 11371168, 11001105, 11071126, 11071269), Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110061110003), Scientific Research Fund of Jilin University (No. 201100011), and Jilin Province Natural Science Foundation (20130101066JC, 20130522102JH, 20101596).

Author information

Correspondence to Dehui Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Appendix

The proof of Proposition 3.1

We have \(c_{n}(t,s)=\log m_{\mathbf{S}_{n}}(t,s)-n(\pi_{1}t+\pi_{2}s)\), where

$$ m_{\mathbf{S}_{n}}(t,s) = E\bigl[e^{tS_{1n}}e^{sS_{2n}} \bigr]. $$
(17)

According to Cossétte et al. [15], we have the expression for the m.g.f.

$$ m_{\mathbf{S}_{n}}(t,s) = \hat{P}_{\mathbf{N}_{(n)}} \bigl(m_{X}(t),m_{Y}(s) \bigr) $$
(18)

and

$$ \hat{P}_{\mathbf{N}_{(n)}}(t,s) = E\bigl[t^{N_{11}+N_{12}+\cdots+N_{1n}}s^{N_{21}+N_{22}+\cdots+N_{2n}} \bigr]. $$
(19)

So, we mainly focus on \(\hat{P}_{\mathbf{N}_{(n)}}(t,s)\), and we have

$$\begin{aligned}& E \bigl[t^{N_{11}+N_{12}+\cdots+N_{1n}}s^{N_{21}+N_{22}+\cdots +N_{2n}} \bigr] \\& \quad = E\bigl[ t^{\varepsilon_{11} +\sum_{j=1}^{\varepsilon_{10}}{\delta_{10j}^{(1)}} +\varepsilon_{12} +\sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}} +\cdots +\varepsilon_{1n} +\sum_{j=1}^{\varepsilon_{1,n-1}}{\delta_{n,n-1,j}^{(1)}} } \\& \qquad {}\times s^{\varepsilon_{21} +\sum_{j=1}^{\varepsilon_{20}}{\delta_{10j}^{(2)}} +\varepsilon_{22} +\sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}} +\cdots +\varepsilon_{2n} +\sum_{j=1}^{\varepsilon_{2,n-1}}{\delta_{n,n-1,j}^{(2)}}}\bigr] \\& \quad = E \bigl[ t^{\sum_{j=1}^{\varepsilon_{10}}{\delta_{10j}^{(1)}}} s^{\sum_{j=1}^{\varepsilon_{20}}{\delta_{10j}^{(2)}}} \bigr]\times E \bigl[t^{\varepsilon_{1n}}s^{\varepsilon_{2n}} \bigr] \\& \qquad {}\times \prod_{i=1}^{n-1} { \bigl\{ E \bigl[ t^{\varepsilon_{1i}+\sum_{j=1}^{\varepsilon_{1i}}{\delta_{i+1,i,j}^{(1)}}} s^{\varepsilon_{2i}+\sum_{j=1}^{\varepsilon_{2i}}{\delta_{i+1,i,j}^{(2)}}} \bigr] \bigr\} }. \end{aligned}$$
(20)

The first item can be calculated as

$$\begin{aligned}& E \bigl[ t^{\sum_{j=1}^{\varepsilon_{10}}{\delta_{10j}^{(1)}}} s^{\sum_{j=1}^{\varepsilon_{20}}{\delta_{10j}^{(2)}}} \bigr] \\ & \quad = \exp\bigl\{ \lambda_{1}(\alpha_{1}t+\bar{ \alpha}_{1}-1) + \lambda_{2}(\alpha_{2}s+\bar{ \alpha}_{2}-1)+ \lambda\bigl[(\alpha_{1}t+\bar{ \alpha}_{1}) (\alpha_{2}s+\bar{\alpha}_{2})-1 \bigr] \bigr\} , \end{aligned}$$
(21)

and the second item in (20) can easily be found,

$$ E\bigl[t^{\varepsilon_{1n}}s^{\varepsilon_{2n}}\bigr] =\exp\bigl\{ \lambda_{1}(t-1)+\lambda_{2}(s-1)+\lambda(ts-1)\bigr\} . $$
(22)

Similarly, we can calculate the last item in (20),

$$\begin{aligned}& E \bigl[ t^{\varepsilon_{1i}+\sum_{j=1}^{\varepsilon_{1i}}{\delta_{i+1,i,j}^{(1)}}} s^{\varepsilon_{2i}+\sum_{j=1}^{\varepsilon_{2i}}{\delta_{i+1,i,j}^{(2)}}} \bigr] \\ & \quad = \exp\bigl\{ \lambda_{1}\bigl(\alpha_{1}t^{2}+ \bar{\alpha}_{1}t-1\bigr) + \lambda_{2}\bigl( \alpha_{2}s^{2}+\bar{\alpha}_{2}s-1\bigr) \\ & \qquad {}+ \lambda \bigl[\bigl(\alpha_{1}t^{2}+\bar{\alpha}_{1}t \bigr) \bigl(\alpha_{2}s^{2}+\bar{\alpha}_{2}s \bigr)-1\bigr] \bigr\} . \end{aligned}$$
(23)

Substituting (21), (22), and (23) into (20), we have

$$\begin{aligned} \hat{P}_{\mathbf{N}_{(n)}}(t,s) =& \exp\bigl\{ \lambda_{1} \bigl[(n-1) \bigl(\alpha_{1}t^{2}+\bar{\alpha}_{1}t \bigr)+(1+\alpha_{1})t+\bar{\alpha} _{1}-(n+1)\bigr]\bigr\} \\ &{}\times \exp\bigl\{ \lambda_{2}\bigl[(n-1) \bigl( \alpha_{2}s^{2}+\bar{\alpha}_{2}s\bigr)+(1+ \alpha_{2})s+\bar{\alpha}_{2}-(n+1)\bigr] \bigr\} \\ &{}\times \exp\bigl\{ \lambda \bigl[ (n-1) \bigl(\alpha_{1}t^{2}+ \bar{\alpha}_{1}t\bigr) \bigl(\alpha_{2}s^{2}+ \bar{\alpha}_{2}s\bigr) \\ &{}+(\alpha_{1}t+\bar{\alpha}_{1}) ( \alpha_{2}s+\bar{\alpha}_{2})+ts-(n+1) \bigr] \bigr\} . \end{aligned}$$
(24)

Then, by (18) and (24), we get the m.g.f. of \(\operatorname{BPMA}(1)\) as

$$\begin{aligned} m_{\mathbf{S}_{n}}(t,s) =& \exp \bigl\{ \lambda_{1} \bigl[(n-1) \bigl(\alpha_{1}m_{X}^{2}(t)+\bar{\alpha}_{1}m_{X}(t)\bigr)+(1+\alpha_{1})m_{X}(t)+ \bar{\alpha}_{1}-(n+1)\bigr]\bigr\} \\ &{}\times \exp\bigl\{ \lambda_{2}\bigl[(n-1) \bigl( \alpha_{2}m_{Y}^{2}(s)+\bar{\alpha}_{2}m_{Y}(s) \bigr)+(1+\alpha_{2})m_{Y}(s)+\bar{\alpha}_{2}-(n+1) \bigr]\bigr\} \\ &{}\times \exp\bigl\{ \lambda\bigl[(n-1) \bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(\alpha_{2}m_{Y}^{2}(s)+ \bar{\alpha} _{2}m_{Y}(s)\bigr) \\ &{}+\bigl(\alpha_{1}m_{X}(t)+\bar{ \alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}(s)+\bar{ \alpha} _{2}\bigr)+m_{X}(t)m_{Y}(s)-(n+1)\bigr] \bigr\} . \end{aligned}$$
(25)

Then

$$ c(t,s)=\lim_{n\rightarrow\infty}\frac{1}{n}\log m_{\mathbf {S}_{n}}(t,s)-(\pi_{1}t+\pi_{2}s). $$
(26)
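
The limit in (26) can be checked numerically: for model \(\operatorname{BPMA}(1)\), \(\frac{1}{n}\log m_{\mathbf{S}_{n}}(t,s)-(\pi_{1}t+\pi_{2}s)\) settles at \(\lambda_{1}(g_{1}-1)+\lambda_{2}(g_{2}-1)+\lambda(g_{1}g_{2}-1)-\pi_{1}t-\pi_{2}s\), with \(g_{1}=\alpha_{1}m_{X}^{2}(t)+\bar{\alpha}_{1}m_{X}(t)\) and \(g_{2}=\alpha_{2}m_{Y}^{2}(s)+\bar{\alpha}_{2}m_{Y}(s)\). A sketch with exponential claims; all parameter values and premium rates are illustrative assumptions.

```python
# Sketch: numerical check of the limit (26) for model BPMA(1). log_mgf_bpma
# is log m_{S_n}(t,s) with Exp(b1), Exp(b2) claims; c_limit is the candidate
# limit. All parameter values (including pi1, pi2) are assumptions.
def log_mgf_bpma(t, s, n, lam1, lam2, lam, a1, a2, b1, b2):
    mX, mY = 1.0 / (1.0 - t / b1), 1.0 / (1.0 - s / b2)
    g1 = a1 * mX ** 2 + (1 - a1) * mX
    g2 = a2 * mY ** 2 + (1 - a2) * mY
    return (lam1 * ((n - 1) * g1 + (1 + a1) * mX + (1 - a1) - (n + 1))
            + lam2 * ((n - 1) * g2 + (1 + a2) * mY + (1 - a2) - (n + 1))
            + lam * ((n - 1) * g1 * g2
                     + (a1 * mX + 1 - a1) * (a2 * mY + 1 - a2)
                     + mX * mY - (n + 1)))

def c_limit(t, s, lam1, lam2, lam, a1, a2, b1, b2, pi1, pi2):
    mX, mY = 1.0 / (1.0 - t / b1), 1.0 / (1.0 - s / b2)
    g1 = a1 * mX ** 2 + (1 - a1) * mX
    g2 = a2 * mY ** 2 + (1 - a2) * mY
    return (lam1 * (g1 - 1) + lam2 * (g2 - 1) + lam * (g1 * g2 - 1)
            - pi1 * t - pi2 * s)

args = (5.0, 3.0, 2.0, 0.3, 0.4, 2.0, 2.0)   # lam1, lam2, lam, a1, a2, b1, b2
pi1, pi2 = 12.0, 8.0                          # assumed premium rates
t, s, n = 0.2, 0.1, 10 ** 6
approx = log_mgf_bpma(t, s, n, *args) / n - (pi1 * t + pi2 * s)
exact = c_limit(t, s, *args, pi1, pi2)
```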

The proof of Proposition 3.3

We first concentrate on the case \(0<\alpha_{1},\alpha_{2}<1\); the other special cases can be proved analogously. The p.g.f. of \(\mathbf{N}_{(n)}\) evaluated at \((t,s)\) is

$$\begin{aligned} \hat{P}_{\mathbf{N}_{(n)}}(t,s) =& E\bigl[t^{N_{11}+N_{12}+\cdots+N_{1n}}s^{N_{21}+N_{22}+\cdots+N_{2n}} \bigr] \\ =& E\bigl[ t^{\sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}} + \sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}\delta_{20j}^{(1)}} +\cdots+ \sum_{j=1}^{N_{10}}{\prod_{m=1}^{n}\delta_{m0j}^{(1)}}} \\ &{}\times s^{\sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}} + \sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}\delta_{20j}^{(2)}} +\cdots+ \sum_{j=1}^{N_{20}}{\prod_{m=1}^{n}\delta_{m0j}^{(2)}}} \bigr] \\ &{}\times E\bigl[ t^{\varepsilon_{11}+\sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}} + \sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}\delta_{31j}^{(1)}} +\cdots+ \sum_{j=1}^{\varepsilon_{11}}{\prod_{m=2}^{n}\delta_{m1j}^{(1)}} } \\ &{}\times s^{\varepsilon_{21}+\sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}} +\sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}\delta_{31j}^{(2)}} +\cdots+\sum_{j=1}^{\varepsilon_{21}}{\prod_{m=2}^{n}\delta_{m1j}^{(2)}} }\bigr] \\ &{}\times \cdots \\ &{}\times E \bigl[ t^{\varepsilon_{1,n-1}+\sum_{j=1}^{\varepsilon_{1,n-1}}{\delta_{n,n-1,j}^{(1)}}} \times s^{\varepsilon_{2,n-1}+\sum_{j=1}^{\varepsilon_{2,n-1}}{\delta_{n,n-1,j}^{(2)}}} \bigr] \\ &{}\times E \bigl[t^{\varepsilon_{1,n}}s^{\varepsilon_{2,n}} \bigr]. \end{aligned}$$
(27)

The first item of (27) can be calculated as follows:

$$\begin{aligned}& E\bigl[ t^{\sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}} + \sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}\delta_{20j}^{(1)}} + \cdots + \sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}\delta_{20j}^{(1)} \cdots \delta_{n0j}^{(1)}}} \\& \qquad {}\times s^{\sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}} + \sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}\delta_{20j}^{(2)}} + \cdots + \sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}\delta_{20j}^{(2)} \cdots \delta_{n0j}^{(2)}} }\bigr] \\& \quad = \sum_{n_{1}=0}^{\infty}\sum _{n_{2}=0}^{\infty} \Biggl\{ \sum _{k_{11}=0}^{n_{1}} {\left ( \begin{array}{@{}c@{}}n_{1} \\ k_{11} \end{array} \right ) t^{k_{11}}\alpha_{1}^{k_{11}}\bar{ \alpha}_{1}^{n_{1}-k_{11}}} \sum_{k_{12}=0}^{k_{11}} \left ( \begin{array}{@{}c@{}} k_{11} \\ k_{12} \end{array} \right ) t^{k_{12}} \alpha_{1}^{k_{12}}\bar{\alpha}_{1}^{k_{11}-k_{12}} \\& \qquad {}\times \cdots \\& \qquad {}\times\sum_{k_{1n}=0}^{k_{1,n-1}} \left ( \begin{array}{@{}c@{}} k_{1,n-1} \\ k_{1n} \end{array} \right ) t^{k_{1n}} \alpha_{1}^{k_{1n}}\bar{\alpha}_{1}^{k_{1,n-1}-k_{1n}} \Biggr\} \\& \qquad {}\times \Biggl\{ \sum_{k_{21}=0}^{n_{2}} \left ( \begin{array}{@{}c@{}} n_{2} \\ k_{21} \end{array} \right ) s^{k_{21}} \alpha_{2}^{k_{21}}\bar{\alpha}_{2}^{n_{2}-k_{21}} \sum_{k_{22}=0}^{k_{21}} \left ( \begin{array}{@{}c@{}} k_{21} \\ k_{22} \end{array} \right ) s^{k_{22}} \alpha_{2}^{k_{22}}\bar{\alpha}_{2}^{k_{21}-k_{22}} \\& \qquad {}\times\cdots \\& \qquad {}\times\sum_{k_{2n}=0}^{k_{2,n-1}} \left ( \begin{array}{@{}c@{}} k_{2,n-1} \\ k_{2n} \end{array} \right ) s^{k_{2n}} \alpha_{2}^{k_{2n}}\bar{\alpha}_{2}^{k_{2,n-1}-k_{2n}} \Biggr\} \\& \qquad {}\times P(N_{10}=n_{1},N_{20}=n_{2}) \\& \quad = \sum_{n_{1}=0}^{\infty}\sum _{n_{2}=0}^{\infty} \bigl\{ \alpha_{1}t\bigl[ \alpha_{1}t\bigl(\ldots\alpha_{1}t(\alpha_{1}t+ \bar{\alpha}_{1})+\bar{\alpha} _{1}+\cdots\bigr)+\bar{ \alpha}_{1}\bigr]+\bar{\alpha}_{1} \bigr\} ^{n_{1}} \\& \qquad {}\times \bigl\{ \alpha_{2}s\bigl[\alpha_{2}s\bigl( 
\ldots\alpha_{2}s(\alpha_{2}s+\bar{\alpha}_{2})+\bar{\alpha}_{2}+\cdots\bigr)+\bar{\alpha}_{2}\bigr]+\bar{\alpha}_{2} \bigr\} ^{n_{2}} \\& \qquad {}\times P(N_{10}=n_{1},N_{20}=n_{2}) \\& \quad = \exp \bigl\{ \lambda_{1}\bigl[A_{n}(t)-1\bigr]+ \lambda_{2}\bigl[B_{n}(s)-1\bigr]+\lambda \bigl[A_{n}(t)B_{n}(s)-1\bigr] \bigr\} . \end{aligned}$$
(28)

Here \(A_{n}(t)\) and \(B_{n}(s)\) are polynomials of degree n given by the recursive formulas

$$\begin{aligned}& A_{0}(t)=1, \qquad B_{0}(s)=1, \\& A_{1}(t)=\alpha_{1}tA_{0}(t)+\bar{ \alpha}_{1}, \qquad B_{1}(s)=\alpha_{2}sB_{0}(s)+ \bar{\alpha}_{2}, \\& A_{i}(t)=\alpha_{1}tA_{i-1}(t)+\bar{ \alpha}_{1}, \qquad B_{i}(s)=\alpha _{2}sB_{i-1}(s)+ \bar{\alpha}_{2} \end{aligned}$$
(29)

for \(i=2,3,\ldots,n\).
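
The recursions above admit the geometric-series closed form derived near the end of this Appendix, which is convenient for checking finiteness. A minimal sketch comparing the two (the values of \(\alpha_{1}\) and t are illustrative assumptions):

```python
# Sketch: the recursion A_i(t) = alpha1 * t * A_{i-1}(t) + (1 - alpha1),
# A_0(t) = 1, versus its closed form. alpha1 and t are assumptions; the
# identity is algebraic, while alpha1 * t < 1 matters only as i grows.
def A_recursive(i, t, alpha):
    a = 1.0                       # A_0(t) = 1
    for _ in range(i):
        a = alpha * t * a + (1.0 - alpha)
    return a

def A_closed(i, t, alpha):
    abar = 1.0 - alpha
    return (abar + alpha * (alpha * t) ** i - (alpha * t) ** (i + 1)) / (1.0 - alpha * t)

pairs = [(A_recursive(i, 1.2, 0.4), A_closed(i, 1.2, 0.4)) for i in range(8)]
```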

The other items of (27) can be calculated analogously,

$$\begin{aligned}& E\bigl[ t^{ \varepsilon_{11}+\sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}} + \sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}\delta_{31j}^{(1)}} + \cdots + \sum_{j=1}^{\varepsilon_{11}}{\delta_{21j}^{(1)}\delta_{31j}^{(1)} \cdots \delta_{n1j}^{(1)}} } \\& \qquad {}\times s^{ \varepsilon_{21} + \sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}} + \sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}\delta_{31j}^{(2)}} + \cdots + \sum_{j=1}^{\varepsilon_{21}}{\delta_{21j}^{(2)}\delta_{31j}^{(2)} \cdots \delta_{n1j}^{(2)}} } \bigr] \\& \quad = \sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty} {t^{n_{1}} \bigl\{ \alpha_{1}t\bigl[\alpha_{1}t\bigl(\cdots \alpha_{1}t(\alpha_{1}t+\bar{\alpha}_{1})+\bar{\alpha}_{1}+\cdots\bigr)+\bar{\alpha}_{1}\bigr] + \bar{\alpha}_{1} \bigr\} ^{n_{1}} } \\& \qquad {}\times s^{n_{2}} \bigl\{ \alpha_{2}s\bigl[\alpha_{2}s\bigl(\cdots\alpha_{2}s(\alpha_{2}s+\bar{\alpha}_{2})+\bar{\alpha}_{2}+\cdots\bigr)+\bar{\alpha}_{2}\bigr] +\bar{\alpha}_{2} \bigr\} ^{n_{2}} \\& \qquad {}\times P(\varepsilon_{11}=n_{1}, \varepsilon_{21}=n_{2}) \\& \quad = \exp \bigl\{ \lambda_{1}\bigl[tA_{n-1}(t)-1\bigr] + \lambda_{2}\bigl[sB_{n-1}(s)-1\bigr] + \lambda \bigl[tsA_{n-1}(t)B_{n-1}(s)-1\bigr] \bigr\} . \end{aligned}$$
(30)

For the kth item we have

$$\begin{aligned}& E\bigl[ t^{\varepsilon_{1k} + \sum_{j=1}^{\varepsilon_{1k}}{\delta_{k+1,k,j}^{(1)}} + \cdots + \sum_{j=1}^{\varepsilon_{1k}} {\delta_{k+1,k,j}^{(1)}\delta_{k+2,k,j}^{(1)}\cdots\delta _{n,k,j}^{(1)}}} \\& \qquad {}\times s^{\varepsilon_{2k} + \sum_{j=1}^{\varepsilon_{2k}}{\delta_{k+1,k,j}^{(2)}} + \cdots + \sum_{j=1}^{\varepsilon_{2k}} {\delta_{k+1,k,j}^{(2)}\delta_{k+2,k,j}^{(2)}\cdots\delta_{n,k,j}^{(2)}}}\bigr] \\& \quad = \exp \bigl\{ \lambda_{1}\bigl(tA_{n-k}(t)-1\bigr) + \lambda_{2}\bigl(sB_{n-k}(s)-1\bigr) + \lambda \bigl(tsA_{n-k}(t)B_{n-k}(s)-1\bigr) \bigr\} . \end{aligned}$$
(31)

The second last item is

$$\begin{aligned}& E \bigl[ t^{\varepsilon_{1,n-1}+\sum_{j=1}^{\varepsilon_{1,n-1}}{\delta_{n,n-1,j}^{(1)}}} s^{\varepsilon_{2,n-1}+\sum_{j=1}^{\varepsilon_{2,n-1}}{\delta_{n,n-1,j}^{(2)}}} \bigr] \\& \quad = \exp \bigl\{ \lambda_{1}\bigl(tA_{1}(t)-1\bigr) + \lambda_{2}\bigl(sB_{1}(s)-1\bigr) + \lambda \bigl(tsA_{1}(t)B_{1}(s)-1\bigr) \bigr\} \end{aligned}$$
(32)

and the last item is

$$ E \bigl[t^{\varepsilon_{1n}}s^{\varepsilon_{2n}} \bigr] = \exp \bigl\{ \lambda_{1}\bigl(tA_{0}(t)-1\bigr) + \lambda_{2} \bigl(sB_{0}(s)-1\bigr) + \lambda\bigl(tsA_{0}(t)B_{0}(s)-1 \bigr) \bigr\} . $$
(33)

Combining (28)-(33), the p.g.f. of the \(\operatorname{BPAR}(1)\) process is

$$\begin{aligned} \hat{P}_{\mathbf{N}_{(n)}}(t,s) =& \exp\Biggl\{ \lambda_{1}\Biggl[A_{n}(t)+t\sum _{i=0}^{n-1}{A_{i}(t)}-(n+1)\Biggr]\Biggr\} \\ &{}\times \exp\Biggl\{ \lambda_{2}\Biggl[B_{n}(s)+s\sum _{i=0}^{n-1}{B_{i}(s)}-(n+1) \Biggr]\Biggr\} \\ &{}\times \exp\Biggl\{ \lambda\Biggl[A_{n}(t)B_{n}(s)+ts\sum _{i=0}^{n-1}{A_{i}(t)B_{i}(s)}-(n+1) \Biggr]\Biggr\} \\ =& \exp\biggl\{ \lambda_{1}\biggl[\frac{t(1-A_{n}(t)+n\bar{\alpha}_{1})}{1-\alpha _{1}t}+A_{n}(t)-(n+1) \biggr]\biggr\} \\ &{}\times \exp\biggl\{ \lambda_{2}\biggl[\frac{s(1-B_{n}(s)+n\bar{\alpha}_{2})}{1-\alpha _{2}s}+B_{n}(s)-(n+1) \biggr]\biggr\} \\ &{}\times \exp\biggl\{ \lambda\biggl[\frac{ts}{1-\alpha_{1}\alpha _{2}ts}L(t,s)+A_{n}(t)B_{n}(s)-(n+1) \biggr]\biggr\} , \end{aligned}$$
(34)

where

$$\begin{aligned} L(t,s) =& 1-A_{n-1}(t)B_{n-1}(s) + \frac {\alpha_{1}\bar{\alpha}_{2}t(1-A_{n-1}(t)+(n-1)\bar{\alpha}_{1})}{ 1-\alpha_{1}t} \\ &{}+ \frac {\bar{\alpha}_{1}\alpha_{2}s(1-B_{n-1}(s)+(n-1)\bar{\alpha}_{2})}{ 1-\alpha_{2}s} +(n-1)\bar{\alpha}_{1}\bar{ \alpha}_{2}. \end{aligned}$$

Then the m.g.f. of \(\operatorname{BPAR}(1)\) is

$$\begin{aligned} m_{\mathbf{S}_{n}}(t,s) =& \exp \biggl\{ \lambda_{1} \biggl[ \frac {m_{X}(t) (1-A_{n}\{m_{X}(t)\}+n\bar{\alpha}_{1} )}{ 1-\alpha_{1}m_{X}(t)} + A_{n}\bigl\{ m_{X}(t)\bigr\} - (n+1) \biggr] \biggr\} \\ &{}\times \exp \biggl\{ \lambda_{2} \biggl[ \frac{m_{Y}(s) (1-B_{n}\{m_{Y}(s)\}+n\bar{\alpha}_{2} )}{1-\alpha_{2}m_{Y}(s)} +B_{n}\bigl\{ m_{Y}(s)\bigr\} -(n+1) \biggr] \biggr\} \\ &{}\times \exp \biggl\{ \lambda \biggl[\frac{m_{X}(t)m_{Y}(s)L\{m_{X}(t),m_{Y}(s)\}}{1-\alpha_{1}\alpha_{2}m_{X}(t)m_{Y}(s)} \\ &{}+A_{n}\bigl\{ m_{X}(t)\bigr\} B_{n}\bigl\{ m_{Y}(s)\bigr\} -(n+1) \biggr]\biggr\} . \end{aligned}$$
(35)

For the polynomials \(A_{i}\{m_{X}(t)\}\) and \(B_{i}\{m_{Y}(s)\}\), \(i=1,2,\ldots,n\), recalling the recursive formulas in (29), we have

$$\begin{aligned}& A_{i}(t) = (\alpha_{1}t)^{i} +\bar{\alpha}_{1}(\alpha_{1}t)^{i-1} +\bar{\alpha}_{1}(\alpha_{1}t)^{i-2} +\cdots +\bar{\alpha}_{1}(\alpha_{1}t) +\bar{\alpha}_{1} \\& \hphantom{A_{i}(t)} = \frac{\bar{\alpha}_{1}+\alpha_{1}(\alpha_{1}t)^{i}-(\alpha_{1}t)^{i+1}}{1-\alpha _{1}t}; \\& B_{i}(s) = (\alpha_{2}s)^{i} +\bar{\alpha}_{2}(\alpha_{2}s)^{i-1} +\bar{\alpha}_{2}(\alpha_{2}s)^{i-2} +\cdots +\bar{\alpha}_{2}(\alpha_{2}s) +\bar{\alpha}_{2} \\& \hphantom{B_{i}(s)} = \frac{\bar{\alpha}_{2}+\alpha_{2}(\alpha_{2}s)^{i}-(\alpha_{2}s)^{i+1}}{1-\alpha_{2}s}. \end{aligned}$$

Then, if \(\alpha_{1}m_{X}(t)<1\) and \(\alpha_{2}m_{Y}(s)<1\), we have \(A_{i}\{m_{X}(t)\}<\infty\) and \(B_{i}\{m_{Y}(s)\}<\infty\) for \(i=0,1,2,\ldots\) .

For the special situations: if \(\alpha_{1}=0\) and \(\alpha_{2}>0\), then \(A_{k}(t)\equiv1\) for all k, and we have

$$\begin{aligned}& A_{n}(t)B_{n}(s)+ts\sum_{k=0}^{n-1}{A_{k}(t)B_{k}(s)} = B_{n}(s)+ts\sum_{k=0}^{n-1}{B_{k}(s)}, \\& \frac{ts}{1-\alpha_{1}\alpha _{2}ts}L(t,s)+A_{n}(t)B_{n}(s) = B_{n}(s)+\frac{ts(1-B_{n}(s)+n\bar{\alpha} _{2})}{1-\alpha_{2}s}. \end{aligned}$$

A symmetric expression can be obtained for the case \(\alpha_{1}>0\) and \(\alpha_{2}=0\).

The proof of Theorem 4.1

The proof proceeds in a similar way to that of Glynn and Whitt [20]. Let \(\boldsymbol{\xi}_{i}=(W_{1i}-\pi_{1},W_{2i}-\pi_{2})\), \(i=1,2,\ldots\), and let \(\mathbf{S}'_{n}=\sum_{i=1}^{n}{\boldsymbol{\xi}_{i}}\) be the net bivariate aggregate loss over n periods. Before giving the proof, we introduce the exponentially tilted measures associated with the bivariate r.v.’s \(\{\mathbf{S}'_{i}, i=1,2,\ldots\}\),

$$ \tilde{P}_{n,\mathbf{r}} (A ) = \int_{A} e^{\langle\mathbf{r},\mathbf{S}'_{n}\rangle-c_{n}(\mathbf{r})} \, dP_{n} \bigl(\mathbf{S}'_{n} \in A \bigr), $$

where A is a (measurable) compact subset of \({R}^{2}\); \(\langle\cdot ,\cdot\rangle\) means the standard Euclidean scalar product, and \(c_{n}(\mathbf{r})=\log{E[e^{\langle\mathbf{r},\mathbf{S}'_{n}\rangle}]}\) is the c.g.f. of \(\mathbf{S}'_{n}\) with the parameter vector r.

We define a partial order on \({R}^{2}\): for two vectors \(\mathbf{X}=(X_{1},X_{2})\) and \(\mathbf{Y}=(Y_{1},Y_{2})\in{R}^{2}\), \(\mathbf{X}\geq\mathbf{Y}\) if and only if \(X_{1}\geq Y_{1}\) and \(X_{2}\geq Y_{2}\).

Lemma A.1

Let \(\tilde{\boldsymbol{\mu}}=(\frac{dc(t,s)}{dt},\frac{dc(t,s)}{ds})|_{t=t^{*},s=s^{*}}\), where \((t^{*},s^{*})\) is defined in Theorem 4.1. Write \(\mathbf{r}^{*}=(t^{*},s^{*})\); then, for each \(\eta>0\), there exist \(z\in[0,1)\) and \(n_{0}\) such that

$$\begin{aligned}& \tilde{P}_{n,\mathbf{r}^{*}} \biggl(\biggl\vert \frac{\mathbf{S}'_{n}}{n}-\tilde{ \boldsymbol{\mu}}\biggr\vert >\eta \biggr)\leq z^{n}, \end{aligned}$$
(36)
$$\begin{aligned}& \tilde{P}_{n,\mathbf{r}^{*}} \biggl(\biggl\vert \frac{\mathbf {S}'_{n-k}}{n}-\tilde{ \boldsymbol{\mu}}\biggr\vert >\eta \biggr)\leq z^{n} \end{aligned}$$
(37)

for \(n\geq n_{0}\) and any given \(k>0\).

Proof

For (36), note that \(D=\{\mathbf{S}'_{n};|\mathbf{S}'_{n}/n-\tilde{\boldsymbol{\mu}}|>\eta\}=D_{1}\cup D_{2}\cup D_{3}\cup D_{4}\), where

$$\begin{aligned}& D_{1} = \bigl\{ \mathbf{S}'_{n};\bigl\vert \mathbf{S}'_{n}/n-\tilde{\boldsymbol{\mu}}\bigr\vert > \eta ,S'_{1n}/n\geq\tilde{\mu}_{1},S'_{2n}/n \geq\tilde{\mu}_{2}\bigr\} ; \\& D_{2} = \bigl\{ \mathbf{S}'_{n};\bigl\vert \mathbf{S}'_{n}/n-\tilde{\boldsymbol{\mu}}\bigr\vert > \eta ,S'_{1n}/n\leq\tilde{\mu}_{1},S'_{2n}/n \geq\tilde{\mu}_{2}\bigr\} ; \\& D_{3} = \bigl\{ \mathbf{S}'_{n};\bigl\vert \mathbf{S}'_{n}/n-\tilde{\boldsymbol{\mu}}\bigr\vert > \eta ,S'_{1n}/n\leq\tilde{\mu}_{1},S'_{2n}/n \leq\tilde{\mu}_{2}\bigr\} ; \\& D_{4} = \bigl\{ \mathbf{S}'_{n};\bigl\vert \mathbf{S}'_{n}/n-\tilde{\boldsymbol{\mu}}\bigr\vert > \eta ,S'_{1n}/n\geq\tilde{\mu}_{1},S'_{2n}/n \leq\tilde{\mu}_{2}\bigr\} ; \end{aligned}$$

we only prove the inequality on \(D_{1}\); the result follows similarly on the other segments.

As for \(D_{1}\), let \(\boldsymbol{\iota}=(\iota_{1},\iota_{2})>(0,0)\) be such that \(\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}\rangle>0\). We can also find a vector \(\boldsymbol{\omega}=(\omega_{1},\omega_{2})\geq(0,0)\) such that \(S'_{1n}/n-\tilde{\mu}_{1}\geq\omega_{1}\eta\) and \(S'_{2n}/n-\tilde{\mu}_{2}\geq\omega_{2}\eta\).

Writing \(v=\min\{\omega_{1},\omega_{2}\}\) and \(\mathbf{J}=(1,1)\), we have

$$\begin{aligned} \tilde{P}_{n,\mathbf{r}^{*}}(D_{1}) =& \int_{D_{1}}{ e^{\langle\mathbf{r}^{*},\mathbf{S}'_{n}\rangle-c_{n}(\mathbf{r}^{*})} \, dP_{n} \bigl(\mathbf{S}'_{n} \in D_{1}\bigr)} \\ \leq& \int_{D_{1}}{ e^{ \langle\mathbf{r}^{*},\mathbf{S}'_{n}\rangle-c_{n}(\mathbf{r}^{*}) + \langle\boldsymbol{\iota},\mathbf{S}'_{n}-n\tilde{\boldsymbol{\mu}}-nv\eta\mathbf {J}\rangle}\, dP_{n} \bigl(\mathbf{S}'_{n}\in D_{1}\bigr)} \\ =& e^{-n\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf{J}\rangle} \int_{D_{1}}{ e^{\langle\mathbf{r}^{*}+\boldsymbol{\iota},\mathbf{S}'_{n}\rangle-c_{n}(\mathbf{r}^{*})} \, dP_{n} \bigl(\mathbf{S}'_{n}\in D_{1}\bigr)} \\ \leq& e^{-n\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf {J}\rangle} \int_{R^{2}}{ e^{\langle\mathbf{r}^{*}+\boldsymbol{\iota},\mathbf{S}'_{n}\rangle -c_{n}(\mathbf{r}^{*})}\, dP_{n} \bigl(\mathbf{S}'_{n}\in{R}^{2} \bigr) } \\ =& e^{-n\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf{J}\rangle} \times e^{c_{n}(\mathbf{r}^{*}+\boldsymbol{\iota})-c_{n}(\mathbf{r}^{*})}. \end{aligned}$$

Hence by (a) and (b) of Lemma 4.2,

$$ \limsup_{n\rightarrow\infty} \frac{1}{n} \log\tilde{P}_{n,\mathbf{r}^{*}} \bigl(\mathbf{S}'_{n}/n>\tilde{\boldsymbol{\mu}}+v\eta\mathbf{J}\bigr) \leq c\bigl(\boldsymbol{ \iota}+\mathbf{r}^{*}\bigr)-c\bigl(\mathbf{r}^{*}\bigr) - \langle\boldsymbol{\iota}, \tilde{\boldsymbol{\mu}}\rangle - v\eta\langle\boldsymbol{\iota},\mathbf{J} \rangle, $$

and by a Taylor expansion the right-hand side is of order \(-v\eta \langle\boldsymbol{\iota},\mathbf{J}\rangle+o(|\boldsymbol{\iota}|)\) as \(\boldsymbol{\iota}\downarrow(0,0)\); this proves (36).
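To spell out the Taylor step (assuming, as the exponential-tilting construction suggests, that \(\tilde{\boldsymbol{\mu}}=\nabla c(\mathbf{r}^{*})\), i.e., that \(\tilde{\boldsymbol{\mu}}\) is the mean under the tilted measure):

$$ c\bigl(\mathbf{r}^{*}+\boldsymbol{\iota}\bigr)-c\bigl(\mathbf{r}^{*}\bigr)-\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}\rangle = \bigl\langle \nabla c\bigl(\mathbf{r}^{*}\bigr)-\tilde{\boldsymbol{\mu}},\boldsymbol{\iota}\bigr\rangle + o\bigl(\vert \boldsymbol{\iota} \vert \bigr) = o\bigl(\vert \boldsymbol{\iota} \vert \bigr), $$

so for \(\boldsymbol{\iota}\) small the bound is dominated by the strictly negative term \(-v\eta\langle\boldsymbol{\iota},\mathbf{J}\rangle\).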

For \(\mathbf{S}'_{n-k}\), similarly we have \(D'=\{\mathbf {S}'_{n-k};|\mathbf{S}'_{n-k}/n-\tilde{\boldsymbol{\mu}}|>\eta\} =D_{1}'+D_{2}'+D_{3}'+D_{4}'\), where

$$\begin{aligned}& D_{1}' = \bigl\{ \mathbf{S}'_{n-k}; \bigl\vert \mathbf{S}'_{n-k}/n-\tilde{\boldsymbol{\mu }} \bigr\vert >\eta,S'_{1,n-k}/n\geq\tilde{ \mu}_{1},S'_{2,n-k}/n\geq\tilde{ \mu}_{2}\bigr\} ; \\& D_{2}' = \bigl\{ \mathbf{S}'_{n-k}; \bigl\vert \mathbf{S}'_{n-k}/n-\tilde{\boldsymbol{\mu }} \bigr\vert >\eta,S'_{1,n-k}/n\leq\tilde{ \mu}_{1},S'_{2,n-k}/n\geq\tilde{ \mu}_{2}\bigr\} ; \\& D_{3}' = \bigl\{ \mathbf{S}'_{n-k}; \bigl\vert \mathbf{S}'_{n-k}/n-\tilde{\boldsymbol{\mu }} \bigr\vert >\eta,S'_{1,n-k}/n\leq\tilde{ \mu}_{1},S'_{2,n-k}/n\leq\tilde{ \mu}_{2}\bigr\} ; \\& D_{4}' = \bigl\{ \mathbf{S}'_{n-k}; \bigl\vert \mathbf{S}'_{n-k}/n-\tilde{\boldsymbol{\mu }} \bigr\vert >\eta,S'_{1,n-k}/n\geq\tilde{ \mu}_{1},S'_{2,n-k}/n\leq\tilde{ \mu}_{2}\bigr\} ; \end{aligned}$$

we prove the result only for \(D_{1}'\); the other segments are handled similarly. The quantities ι, v, η, and J remain unchanged for \(D_{1}'\). We have

$$\begin{aligned} \tilde{P}_{n,\mathbf{r}^{*}}\bigl(D_{1}'\bigr) \leq& e^{-(n-k)\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf {J}\rangle} \int_{D_{1}'}e^{\langle\mathbf{r}^{*},\mathbf{S}'_{n}\rangle-c_{n}(\mathbf{r}^{*})} e^{\langle\boldsymbol{\iota},\mathbf{S}'_{n}\rangle -\langle\boldsymbol{\iota},\boldsymbol{\xi}_{n-k+1}+\cdots+\boldsymbol{\xi}_{n}\rangle} \, dP_{n} \bigl(\mathbf{S}'_{n-k} \in D_{1}' \bigr) \\ \leq& e^{-(n-k)\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf{J}\rangle} E \bigl[ e^{\langle\boldsymbol{\iota}+\mathbf{r}^{*},\mathbf{S}'_{n}\rangle -\langle\boldsymbol{\iota},\boldsymbol{\xi}_{n-k+1}+\cdots+\boldsymbol{\xi}_{n}\rangle -c_{n}(\mathbf{r}^{*})} \bigr] \\ \leq& e^{-(n-k)\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf {J}\rangle-c_{n}(\mathbf{r}^{*})} \bigl\{ E \bigl[e^{p\langle\boldsymbol{\iota}+\mathbf{r}^{*},\mathbf{S}'_{n}\rangle } \bigr] \bigr\} ^{1/p} \bigl\{ E \bigl[e^{-q\langle\boldsymbol{\iota},\boldsymbol{\xi}_{n-k+1}+\cdots+\boldsymbol {\xi}_{n}\rangle} \bigr] \bigr\} ^{1/q} \\ \leq& e^{-(n-k)\langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta\mathbf {J}\rangle-c_{n}(\mathbf{r}^{*})} e^{c_{n} (p(\boldsymbol{\iota}+\mathbf{r}^{*}) )/p} \bigl\{ E \bigl[e^{-q\langle\boldsymbol{\iota},\boldsymbol{\xi}_{n-k+1}+\cdots+\boldsymbol {\xi}_{n}\rangle} \bigr] \bigr\} ^{1/q}, \end{aligned}$$

where we use Hölder’s inequality with \(1/p+1/q=1\). Let \(p\downarrow1\) in such a way that \(\vert p(\boldsymbol{\iota}+\mathbf{r}^{*})-\mathbf {r}^{*}\vert \downarrow0\) and \(\vert q\boldsymbol{\iota} \vert \downarrow0\). Since \(\{E [e^{-q\langle\boldsymbol{\iota},\boldsymbol{\xi }_{n-k+1}+\cdots+\boldsymbol{\xi}_{n}\rangle} ] \}^{1/q}\) is bounded for large n because \(\mathbf{S}'_{n}\) is stationary, we get

$$ \limsup_{n\rightarrow\infty} \frac{1}{n} \log\tilde{P}_{n,\mathbf{r}^{*}} \bigl(\mathbf{S}'_{n-k}/n>\tilde{ \boldsymbol{\mu}}+v\eta \mathbf{J} \bigr) \leq - \langle\boldsymbol{\iota},\tilde{\boldsymbol{\mu}}+v\eta \mathbf{J}\rangle + c \bigl(p\bigl(\boldsymbol{\iota}+\mathbf{r}^{*}\bigr) \bigr)/p - c\bigl(\mathbf{r}^{*}\bigr), $$

and by a Taylor expansion it is easy to see that the right-hand side can be made strictly negative by taking p close enough to 1 and ι close enough to \((0,0)\). □

Proof of Theorem 4.1

Let \(\mathbf{u}=(u_{1},u_{2})\). We first show that \(\liminf_{u_{1}\rightarrow \infty,u_{2}\rightarrow\infty}\{\log\Psi_{\mathrm{max}}(\mathbf{u})+ \langle\mathbf{r}^{*},\mathbf{u}\rangle\}\geq0\). Let \(\eta>0\) be given and let \(m=m(\eta)=\max \{ \lfloor(1+\eta)\frac{u_{1}}{\tilde{\mu }_{1}} \rfloor, \lfloor(1+\eta)\frac{u_{2}}{\tilde{\mu}_{2}} \rfloor \}+1\), so that \(\{\mathbf{S}'_{m};\mathbf{S}'_{m}\geq\mathbf{u}\}\supseteq\{\mathbf {S}'_{m};\mathbf{S}'_{m}\geq\frac{m\tilde{\boldsymbol{\mu}}}{1+\eta}\}\).

Then

$$\begin{aligned} \Psi_{\mathrm{max}}(\mathbf{u}) \geq& P \bigl(\mathbf{S}'_{m} \geq\mathbf{u} \bigr) =\tilde{E}_{m,\mathbf{r}^{*}} \bigl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{m}\rangle+c_{m}(\mathbf{r}^{*})};\mathbf {S}'_{m}\geq\mathbf{u} \bigr] \\ \geq& \tilde{E}_{m,\mathbf{r}^{*}} \biggl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{m}\rangle+c_{m}(\mathbf{r}^{*})};\mathbf {S}'_{m}\geq\frac{m\tilde{\boldsymbol{\mu}}}{1+\eta} \biggr] \\ =& \tilde{E}_{m,\mathbf{r}^{*}} \biggl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{m}\rangle+c_{m}(\mathbf{r}^{*})}; \frac{\mathbf{S}'_{m}}{m}- \tilde{\boldsymbol{\mu}}\geq-\frac{\eta \tilde{\boldsymbol{\mu}}}{1+\eta} \biggr] \\ \geq& \tilde{E}_{m,\mathbf{r}^{*}} \biggl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{m}\rangle+c_{m}(\mathbf{r}^{*})}; \biggl\vert \frac{\mathbf{S}'_{m}}{m}-\tilde{\boldsymbol{\mu}}\biggr\vert \leq \frac{\eta|\tilde{\boldsymbol{\mu}}|}{1+\eta} \biggr] \\ \geq& \exp \biggl\{ - \bigl\langle \mathbf{r}^{*},m\tilde{\boldsymbol{\mu}}\bigr\rangle \frac{1+2\eta}{1+\eta} +c_{m}\bigl(\mathbf{r}^{*}\bigr) \biggr\} \tilde{P}_{m,\mathbf{r}^{*}} \biggl( \biggl\vert \frac{\mathbf{S}'_{m}}{m}-\tilde{ \boldsymbol{\mu}}\biggr\vert \leq \frac{\eta \vert \tilde{\boldsymbol{\mu}} \vert }{1+\eta} \biggr). \end{aligned}$$

Here, \(\tilde{P}_{m,\mathbf{r}^{*}}(\cdot)\) goes to 1 by Lemma A.1, and since \(m\tilde{\boldsymbol{\mu}}\geq(1+\eta)\mathbf{u}\), we have

$$ \log\Psi_{\mathrm{max}}(\mathbf{u})\geq-(1+2\eta)\bigl\langle \mathbf{r}^{*}, \mathbf {u}\bigr\rangle , $$

letting \(\eta\downarrow0\) gives \(\liminf_{u_{1}\rightarrow\infty ,u_{2}\rightarrow\infty} \{\log\Psi_{\mathrm{max}}(\mathbf{u})+\langle\mathbf{r}^{*},\mathbf{u}\rangle\}\geq0\).

For \(\limsup_{u_{1}\rightarrow\infty,u_{2}\rightarrow\infty} \{\log\Psi_{\mathrm{max}}(\mathbf{u})+\langle\mathbf{r}^{*},\mathbf{u}\rangle\}\leq 0\), we write

$$ \Psi_{\mathrm{max}}(\mathbf{u})=\sum_{n=1}^{\infty}{P \bigl(T_{\mathrm{max}}(\mathbf {u})=n \bigr)}=I_{1}+I_{2}+I_{3}+I_{4}, $$

where

$$\begin{aligned}& I_{1} = \sum_{n=1}^{n(\delta)} {P \bigl(T_{\mathrm{max}}(\mathbf{u})=n \bigr)}, \\& I_{2} = \sum_{n=n(\delta)+1}^{\lfloor(1-\delta)\nu\rfloor} {P \bigl(T_{\mathrm{max}}(\mathbf{u})=n \bigr)}, \\& I_{3} = \sum_{n=\lfloor(1-\delta)\nu\rfloor+1}^{\lfloor(1+\delta)\nu\rfloor} {P \bigl(T_{\mathrm{max}}(\mathbf{u})=n \bigr)}, \\& I_{4} = \sum_{n=\lfloor(1+\delta)\nu\rfloor+1}^{\infty} {P \bigl(T_{\mathrm{max}}(\mathbf{u})=n \bigr)}, \end{aligned}$$

Here \(\nu=\max\{u_{1}/\tilde{\mu}_{1},u_{2}/\tilde{\mu}_{2}\}\), and \(n(\delta)\) is chosen such that \({c_{n}(\mathbf{r}^{*})/n}<\min\{\delta,(-\log z)/2\}\) and

$$ \tilde{P}_{n,\mathbf{r}^{*}} \biggl( \biggl\vert \frac{\mathbf{S}'_{n}}{n}-\tilde{\boldsymbol{\mu}} \biggr\vert > \frac{\delta|\tilde{\boldsymbol{\mu}}|}{1+\delta} \biggr) \leq z^{n}, \qquad \tilde{P}_{n,\mathbf{r}^{*}} \biggl( \biggl\vert \frac{\mathbf{S}'_{n-k}}{n}-\tilde{ \boldsymbol{\mu}} \biggr\vert > \frac{\delta|\tilde{\boldsymbol{\mu}}|}{1+\delta} \biggr) \leq z^{n} $$

for some \(z<1\), all \(n>n(\delta)\) and any given \(k>0\).

Since

$$\begin{aligned} P \bigl(T_{\mathrm{max}}(\mathbf{u})=n \bigr) \leq& P \bigl(\mathbf{S}'_{n} \geq\mathbf{u} \bigr)= \tilde{E}_{n,\mathbf{r}^{*}} \bigl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{n}\rangle+c_{n}(\mathbf{r}^{*})}; \mathbf{S}'_{n}\geq\mathbf{u} \bigr] \\ \leq& e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle+c_{n}(\mathbf{r}^{*})} \tilde{P}_{n,\mathbf{r}^{*}} \bigl(\mathbf{S}'_{n} \geq\mathbf{u}\bigr), \end{aligned}$$

we have

$$\begin{aligned}& I_{1} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=1}^{n(\delta)}{e^{c_{n}(\mathbf{r}^{*})}}; \\& I_{2} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=n(\delta)+1} ^{\lfloor(1-\delta)\nu\rfloor} {e^{c_{n}(\mathbf{r}^{*})} \tilde{P}_{n,\mathbf{r}^{*}} \bigl( \mathbf{S}'_{n}\geq\mathbf{u} \bigr)} \\& \hphantom{I_{2}} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=n(\delta)+1}^{\lfloor(1-\delta)\nu\rfloor} z^{-n/2} \tilde{P}_{n,\mathbf{r}^{*}} \biggl( \biggl\vert \frac{\mathbf{S}'_{n}}{n}-\tilde{\boldsymbol{\mu}}\biggr\vert > \frac{\delta|\tilde{\boldsymbol{\mu}}|}{1+\delta} \biggr) \\& \hphantom{I_{2}} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=n(\delta)+1} ^{\lfloor(1-\delta)\nu\rfloor} \frac{z^{n}}{z^{n/2}} \\& \hphantom{I_{2}} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=0}^{\infty}z^{n/2} = \frac{e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle}}{1-z^{1/2}}; \\& I_{3} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=\lfloor(1-\delta)\nu\rfloor+1} ^{\lfloor(1+\delta)\nu\rfloor} e^{c_{n}(\mathbf{r}^{*})} \\& \hphantom{I_{3}} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=\lfloor(1-\delta)\nu\rfloor+1} ^{\lfloor(1+\delta)\nu\rfloor} e^{n\delta} \\& \hphantom{I_{3}} \leq e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \bigl(2\lfloor\delta\nu\rfloor+1 \bigr) e^{\delta(1+\delta)\nu}; \end{aligned}$$

and, finally, since we can find k such that \(\mathbf{S}'_{n-k}\leq\mathbf {u}\), the last part satisfies

$$\begin{aligned} I_{4} \leq& \sum_{n=\lfloor(1+\delta)\nu\rfloor+1}^{\infty} {P \bigl(\mathbf{S}'_{n-k}\leq\mathbf{u}, \mathbf{S}'_{n}\geq\mathbf {u} \bigr)} \\ =& \sum_{n=\lfloor(1+\delta)\nu\rfloor+1}^{\infty} \tilde{E}_{n,\mathbf{r}^{*}} \bigl[ e^{-\langle\mathbf{r}^{*},\mathbf{S}'_{n}\rangle+c_{n}(\mathbf{r}^{*})} ;\mathbf{S}'_{n-k}\leq\mathbf{u}, \mathbf{S}'_{n}\geq\mathbf{u} \bigr] \\ \leq& e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=\lfloor(1+\delta)\nu\rfloor+1}^{\infty}e^{c_{n}(\mathbf{r}^{*})} \tilde{P}_{n,\mathbf{r}^{*}} \biggl( \biggl\vert \frac{\mathbf{S}'_{n-k}}{n}-\tilde{\boldsymbol{\mu}} \biggr\vert > \frac{\delta|\tilde{\boldsymbol{\mu}}|}{1+\delta} \biggr) \\ \leq& e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \sum_{n=\lfloor(1+\delta)\nu\rfloor+1}^{\infty} \frac{z^{n}}{z^{n/2}} \\ \leq& \frac{e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle}}{1-z^{1/2}}. \end{aligned}$$
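As a quick numerical sanity check of the geometric-series bounds used for \(I_{2}\) and \(I_{4}\) (illustration only; the values of z and N below are arbitrary choices, not quantities from the proof), note that for \(0<z<1\) the tail sum \(\sum_{n\geq N}z^{n/2}\) is geometric with ratio \(z^{1/2}\) and hence bounded by \(1/(1-z^{1/2})\):

```python
# Sanity check: sum_{n >= N} z^(n/2) = z^(N/2) / (1 - z^(1/2)) <= 1 / (1 - z^(1/2))
# for 0 < z < 1.  z and N are arbitrary illustrative choices.
z = 0.25
N = 10

# Truncated tail sum; the remainder beyond n = 2000 is negligible for z = 0.25.
partial = sum(z ** (n / 2) for n in range(N, 2000))
closed_form = z ** (N / 2) / (1 - z ** 0.5)

assert abs(partial - closed_form) < 1e-12
assert closed_form <= 1 / (1 - z ** 0.5)
print(closed_form)  # ≈ 0.001953125
```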

Thus, the upper bound for \(\Psi_{\mathrm{max}}(\mathbf{u})\) is

$$ e^{-\langle\mathbf{r}^{*},\mathbf{u}\rangle} \Biggl\{ \sum_{n=1}^{n(\delta)} {e^{c_{n}(\mathbf{r}^{*})}} + \frac{2}{1-z^{1/2}} + \bigl(2\lfloor\delta\nu\rfloor+1\bigr) e^{\delta(1+\delta)\nu} \Biggr\} . $$

Then, using (b) of Lemma 4.1, we get

$$ \limsup_{u_{1}\rightarrow\infty,u_{2}\rightarrow\infty}\bigl\{ \log\Psi _{\mathrm{max}}(\mathbf{u}) + \bigl\langle \mathbf{r}^{*},\mathbf{u}\bigr\rangle -\delta(1+\delta)\nu \bigr\} \leq0. $$

Letting \(\delta\downarrow0\), the result is proved. □

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Cite this article

Ma, D., Wang, D. & Cheng, J. Bidimensional discrete-time risk models based on bivariate claim count time series. J Inequal Appl 2015, 105 (2015). https://doi.org/10.1186/s13660-015-0618-3

Keywords

  • bidimensional discrete-time risk model
  • adjustment coefficient
  • bivariate Poisson \(\operatorname{MA}(1)\)
  • bivariate Poisson \(\operatorname{AR}(1)\)
  • large deviations
  • ruin probability
  • value-at-risk