Bidimensional discrete-time risk models based on bivariate claim count time series

Journal of Inequalities and Applications 2015, 2015:105

https://doi.org/10.1186/s13660-015-0618-3

  • Received: 10 October 2014
  • Accepted: 3 March 2015
  • Published:

Abstract

In this paper, we consider a class of bidimensional discrete-time risk models in which the claim counts obey specific bivariate integer-valued time series, namely the bivariate Poisson MA (BPMA) and the bivariate Poisson AR (BPAR) processes. We derive the moment generating functions (m.g.f.’s) of these processes and present explicit expressions for their adjustment coefficient functions. The asymptotic approximations (upper bounds) to three different types of ruin probabilities are discussed, and the marginal value-at-risk (VaR) for each model is obtained. Numerical examples are provided to compute the adjustment coefficients discussed in the paper.

Keywords

  • bidimensional discrete-time risk model
  • adjustment coefficient
  • bivariate Poisson \(\operatorname{MA}(1)\)
  • bivariate Poisson \(\operatorname{AR}(1)\)
  • large deviations
  • ruin probability
  • value-at-risk

1 Introduction

Bidimensional risk theory has gained much attention in the last two decades due to its complexity and its various uses in different fields. Chan et al. [1] studied three types of ruin probabilities with phase-type distributions. Yuen et al. [2] introduced the bivariate compound binomial model to approximate the finite-time survival probability of the bivariate compound Poisson model with common shock. Li et al. [3] studied the ruin probabilities of a bidimensional perturbed insurance risk model and obtained an upper bound for the infinite-time ruin probability by using the martingale technique. Avram et al. [4] studied the joint ruin problem for two insurance companies that divide claims and premia between them in some specified proportions. Badescu et al. [5] extended the bidimensional risk models proposed by Avram et al. and derived the Laplace transform of the time until at least one insurer is ruined. For other important works on bidimensional risk models, see Cai and Li [6], Dang et al. [7], Chen et al. [8], etc.

Univariate integer-valued time series models, such as the integer-valued moving average (INMA), the integer-valued autoregressive (INAR), and the integer-valued autoregressive moving average (INARMA) models, are based on appropriate thinning operations. This category of models has been proposed by many authors; see, e.g., Al-Osh and Alzaid [9, 10] and McKenzie [11, 12]. Quoreshi [13] proposed a bivariate integer-valued moving average (BINMA) model, which is applied to examine the correlation between stock transaction series. Pedeli and Karlis [14] introduced a bivariate integer-valued \(\operatorname{AR}(1)\) (\(\operatorname{BINAR}(1)\)) model with innovations following bivariate Poisson (BP) and bivariate negative binomial (BNB) distributions.

To describe the dependence of the claim counts among different periods, univariate integer-valued time series have been applied: Cossétte et al. [15, 16] applied the Poisson \(\operatorname{MA}(1)\) and Poisson \(\operatorname{AR}(1)\) processes to discrete-time risk models.

Let us consider an example from car insurance. Such policies usually cover at least two liabilities: third-party insurance and CDW coverage. If we regard the claim counts of each liability as an integer-valued time series, and the two series are correlated, then the whole claim count process should be a bivariate integer-valued time series, and the models proposed by Pedeli and Karlis [14] fit this situation perfectly. In this paper, we extend their risk models to bidimensional contexts, and we study bidimensional risk models in which the bivariate claim counts obey bivariate integer-valued time series.

The paper is structured as follows. In the next section, we propose a class of general bidimensional risk models based on bivariate time series for the bivariate claim count r.v.’s. In Section 3, we present the risk models based on bivariate claim counts obeying the bivariate Poisson \(\operatorname{MA}(1)\) (\(\operatorname{BPMA}(1)\)) and the bivariate Poisson \(\operatorname{AR}(1)\) (\(\operatorname{BPAR}(1)\)) processes generated by binomial thinning operations. For each model, we examine its properties and derive expressions for the adjustment coefficient function and the associated compound distributions. In Section 4, we present the asymptotic approximations to the three different types of ruin probabilities via large deviation theorems for our models. A numerical example computing the adjustment coefficients and the marginal VaR values is provided in Section 5. The detailed proofs of the important results are presented in the Appendix.

2 Bidimensional discrete-time risk models

In this section, we consider the bidimensional risk model as follows. Let \((R_{1n},R_{2n})\) be the bidimensional discrete-time surplus process
$$ \left ( \begin{array}{@{}c@{}} R_{1n} \\ R_{2n} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} u_{1} \\ u_{2} \end{array} \right ) +n\left ( \begin{array}{@{}c@{}} \pi_{1} \\ \pi_{2} \end{array} \right )- \sum_{i=1}^{n} \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{1i}}{X_{ij}} \\ \sum_{k=1}^{N_{2i}}{Y_{ik}} \end{array} \right ), $$
(1)
where
  • \(u_{1}\) and \(u_{2}\) are the positive initial reserves of the first and second business, respectively;

  • \(\pi_{1}\) and \(\pi_{2}\) are the premia rates of the first and second business, respectively;

  • \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) are the bivariate claim counts of the two businesses in the ith period;

  • \(\{X_{ij}, i, j= 1, 2, \ldots\}\) are a sequence of i.i.d. claim size r.v.’s of the first business;

  • \(\{Y_{ik}, i, k= 1, 2, \ldots\}\) are a sequence of i.i.d. claim size r.v.’s of the second business;

  • \(\{X_{ij}, i, j= 1, 2, \ldots\}\) and \(\{Y_{ik}, i, k= 1, 2, \ldots \}\) are mutually independent.

In this paper, we restrict attention to light-tailed distributions. \(X_{ij}\) (\(Y_{ik}\)) are copies of a r.v. X (Y) whose distribution function (d.f.) is \(F(x)\), \(x> 0\) (\(G(y)\), \(y> 0\)), with mean \(\mu_{1}\) (\(\mu_{2}\)) and m.g.f. \(m_{X}(t)\) (\(m_{Y}(s)\)).

Let \((N_{(1n)},N_{(2n)})=\sum_{i=1}^{n}(N_{1i},N_{2i})\) be the aggregate bivariate claim counts over n periods; \((W_{1i},W_{2i})=(\sum_{j=1}^{N_{1i}}X_{ij},\sum_{k=1}^{N_{2i}}Y_{ik})\) be the bivariate aggregate claims of the two businesses in the ith period, \(i=1,2,\ldots\) ; and \((S_{1n},S_{2n})=\sum_{i=1}^{n}(W_{1i},W_{2i})\) be the aggregate bivariate claims of the two businesses. We denote them by the vector notation \(\mathbf{N}_{(n)}\), \(\mathbf{W}_{i}\), and \(\mathbf{S}_{n}\), respectively.

There are three types of ruin probabilities defined through different times of ruin:
  • \(T_{\mathrm{max}}=\inf\{n\mid\max\{R_{1n}, R_{2n}\}\leq0, n\in N^{+}\}\);

  • \(T_{\mathrm{sum}}=\inf\{n\mid R_{1n}+ R_{2n}\leq0, n\in N^{+}\}\);

  • \(T_{\mathrm{min}}=\inf\{n\mid\min\{R_{1n}, R_{2n}\}\leq0, n\in N^{+}\}\).

The corresponding ruin probabilities are denoted by
  • \(\Psi_{\mathrm{max}}(u_{1}, u_{2})=P\{T_{\mathrm{max}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\);

  • \(\Psi_{\mathrm{sum}}(u_{1}, u_{2})=P\{T_{\mathrm{sum}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\);

  • \(\Psi_{\mathrm{min}}(u_{1}, u_{2})=P\{T_{\mathrm{min}}< \infty\mid(R_{10}, R_{20})=(u_{1}, u_{2})\}\).

3 Claim count time series based on the ∘-thinning operation

In this section, we first introduce the bivariate Poisson distribution and the bivariate binomial thinning operator ‘∘’. Let \(M_{1}\), \(M_{2}\), and M be three mutually independent Poisson r.v.’s with corresponding parameters \(\lambda_{1}>0\), \(\lambda_{2}>0\), and \(\lambda>0\). According to Marshall and Olkin [17], \((U,V)=(M_{1}+M,M_{2}+M)\) (M is called the common shock r.v. in the insurance field) obeys the bivariate Poisson (BP) distribution, whose probability mass function is
$$ P(U=n_{1},V=n_{2})=\sum _{n=0}^{n_{1}\wedge n_{2}} \frac{\lambda_{1}^{n_{1}-n}\lambda_{2}^{n_{2}-n}\lambda^{n}}{(n_{1}-n)!(n_{2}-n)!n!} \exp\{- \lambda_{1}-\lambda_{2}-\lambda\}, $$
(2)
where \(n_{1}\wedge n_{2} = \min(n_{1}, n_{2})\), \(E[U]=\operatorname{Var}[U]=\lambda _{1}+\lambda\), \(E[V]=\operatorname{Var}[V]=\lambda_{2}+\lambda\), and \(\operatorname{Cov}[U,V]=\lambda\); we write \((U,V)\sim\operatorname{BP}(\lambda_{1},\lambda_{2},\lambda)\). Its probability generating function (p.g.f.) is
$$ \hat{P}(t,s)= \exp\bigl\{ \lambda_{1}(t-1)+ \lambda_{2}(s-1)+\lambda(ts-1)\bigr\} . $$
(3)
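The common-shock construction above can be sampled directly. The following Python sketch (the function and parameter names are ours, not from the paper) draws \(\operatorname{BP}(\lambda_{1},\lambda_{2},\lambda)\) pairs and can be used to check the moment identities \(E[U]=\operatorname{Var}[U]=\lambda_{1}+\lambda\) and \(\operatorname{Cov}[U,V]=\lambda\) empirically.

```python
import numpy as np

def sample_bp(lam1, lam2, lam, size, rng=None):
    """Draw `size` pairs from BP(lam1, lam2, lam) via the common-shock
    construction (U, V) = (M1 + M, M2 + M) with independent Poissons."""
    rng = np.random.default_rng() if rng is None else rng
    m1 = rng.poisson(lam1, size)
    m2 = rng.poisson(lam2, size)
    m = rng.poisson(lam, size)          # common shock M
    return m1 + m, m2 + m

# quick moment check against E[U] = lam1 + lam and Cov[U, V] = lam
u, v = sample_bp(5.0, 3.0, 2.0, 100_000)
print(u.mean(), v.mean(), np.cov(u, v)[0, 1])
```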
For BP r.v.’s, Pedeli and Karlis [14] defined the bivariate binomial thinning operator by applying the univariate binomial thinning mechanism componentwise. Letting the binomial thinning operators \(\alpha_{1}\circ\) and \(\alpha_{2}\circ\) (\(\alpha_{1},\alpha_{2}\in[0,1]\)) act on U and V, respectively, we write
$$ \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0& \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} U \\ V \end{array} \right )= \left ( \begin{array}{@{}c@{}} \sum_{i=1}^{U}\delta_{i}^{(1)} \\ \sum_{j=1}^{V}\delta_{j}^{(2)} \end{array} \right ), $$
(4)
where \(\{\delta_{i}^{(1)}, i= 1, 2, \ldots\}\), and \(\{\delta_{j}^{(2)}, j= 1, 2, \ldots\}\), are two mutually independent sequences of i.i.d. Bernoulli r.v.’s with mean \(\alpha_{1}\) and \(\alpha_{2}\), respectively. Furthermore, their joint p.g.f. is
$$\begin{aligned} \hat{P}(t,s) =& \exp \bigl\{ \lambda_{1}[ \alpha_{1}t+\bar{\alpha}_{1}-1] \bigr\} \times\exp \bigl\{ \lambda_{2}[\alpha_{2}s+\bar{\alpha}_{2}-1] \bigr\} \\ &{}\times \exp \bigl\{ \lambda\bigl[(\alpha_{1}t+\bar{ \alpha}_{1}) (\alpha_{2}s+\bar{\alpha} _{2})-1 \bigr] \bigr\} , \end{aligned}$$
(5)
where \(\bar{\alpha}_{1}=1-\alpha_{1}\) and \(\bar{\alpha}_{2}=1-\alpha_{2}\).
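Equation (4) thins each component independently with its own Bernoulli marks. A minimal sketch of the operator under the same conventions (names are ours):

```python
import numpy as np

def binomial_thin(alpha, counts, rng):
    """Univariate binomial thinning alpha∘counts: each of the counts[i]
    units survives independently with probability alpha."""
    return rng.binomial(counts, alpha)

def bivariate_thin(alpha1, alpha2, u, v, rng=None):
    """Diagonal bivariate thinning of (4): returns (alpha1∘U, alpha2∘V)."""
    rng = np.random.default_rng() if rng is None else rng
    return binomial_thin(alpha1, u, rng), binomial_thin(alpha2, v, rng)
```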

3.1 Risk model for \(\operatorname{BPMA}(1)\)

3.1.1 Definition and properties

Let us consider a \(\operatorname{BPMA}(1)\) process for \(\{(N_{1i},N_{2i}), i=1,2,\ldots\}\), whose dynamics is defined as follows:
$$ \left ( \begin{array}{@{}c@{}} N_{1i} \\ N_{2i} \end{array} \right ) = \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} \varepsilon_{1,i-1} \\ \varepsilon_{2,i-1} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{1i} \\ \varepsilon_{2i} \end{array} \right ),\quad i= 1,2,\ldots, $$
(6)
where \(\{(\varepsilon_{1i}, \varepsilon_{2i}), i=0,1,\ldots\}\) is a sequence of i.i.d. \(\operatorname{BP}(\lambda_{1},\lambda_{2},\lambda)\) r.v.’s. To distinguish the thinning operations performed in different periods, we write
$$ \alpha_{1}\circ\varepsilon_{1i}=\sum _{j=1}^{\varepsilon_{1i}}\delta _{i+1,i,j}^{(1)}, \qquad \alpha_{2}\circ\varepsilon_{2i}=\sum _{j=1}^{\varepsilon_{2i}}\delta _{i+1,i,j}^{(2)}, \quad i=1, 2, \ldots, $$
(7)
where \(\{\delta_{i,i-1,j}^{(1)},i,j=1, 2, \ldots\}\) and \(\{\delta _{i,i-1,j}^{(2)},i,j=1, 2, \ldots\}\) are two independent sequences of i.i.d. Bernoulli r.v.’s with means \(\alpha_{1}\) and \(\alpha_{2}\), respectively.
From (4) and (6), we have
$$\begin{aligned}& \left ( \begin{array}{@{}c@{}} N_{11} \\ N_{21} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{10}+\varepsilon_{11} \\ \alpha_{2}\circ\varepsilon_{20}+\varepsilon_{21} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{10}}\delta_{10j}^{(1)}+\varepsilon_{11} \\ \sum_{j=1}^{\varepsilon_{20}}\delta_{10j}^{(2)}+\varepsilon_{21} \end{array} \right ), \\& \left ( \begin{array}{@{}c@{}}N_{1k} \\ N_{2k} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{1,k-1}+\varepsilon_{1k} \\ \alpha_{2}\circ\varepsilon_{2,k-1}+\varepsilon_{2k} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{1,k-1}}\delta_{k,k-1,j}^{(1)}+\varepsilon_{1k} \\ \sum_{j=1}^{\varepsilon_{2,k-1}}\delta_{k,k-1,j}^{(2)}+\varepsilon_{2k} \end{array} \right ) \end{aligned}$$
for \(k=2,3\ldots\) .
As stated in Al-Osh and Alzaid [9, 10], the marginal distributions of \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) are uniquely determined by the distributions of \(\{(\varepsilon_{1i},\varepsilon _{2i}),i=0,1,\ldots\}\); hence they have identical bivariate Poisson margins and
$$\begin{aligned}& E[N_{1i}] = \operatorname{Var}[N_{1i}]=E[ \alpha_{1}\circ\varepsilon _{1,i-1}+\varepsilon_{1i}]=(1+ \alpha_{1}) (\lambda_{1}+\lambda), \\& E[N_{2i}] = \operatorname{Var}[N_{2i}]=E[ \alpha_{2}\circ\varepsilon _{2,i-1}+\varepsilon_{2i}]=(1+ \alpha_{2}) (\lambda_{2}+\lambda). \end{aligned}$$
The covariances are listed as follows:
$$\begin{aligned}& \operatorname{Cov}[N_{1i}, N_{2i}]=(1+ \alpha_{1}\alpha_{2})\lambda,\quad i=1,2\ldots ; \\& \operatorname{Cov}[N_{ki}, N_{k,i+h}] = \left \{ \begin{array}{l@{\quad}l} \alpha_{k}(\lambda_{k}+\lambda), &\text{if }h=1; \\ 0, &\text{if }h>1; \end{array} \right .\quad k=1, 2; \\& \operatorname{Cov}[N_{ki}, N_{j,i+h}]= \left \{ \begin{array}{l@{\quad}l} \alpha_{j}\lambda, &\text{if }h=1; \\ 0, &\text{if }h>1; \end{array} \right .\quad k\neq j=1, 2. \end{aligned}$$
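To illustrate the dynamics (6) and the moment formulas above, the following sketch simulates a \(\operatorname{BPMA}(1)\) path using the common-shock and thinning constructions (variable names are ours) and compares the sample mean of \(N_{1i}\) with \((1+\alpha_{1})(\lambda_{1}+\lambda)\).

```python
import numpy as np

def simulate_bpma1(alpha1, alpha2, lam1, lam2, lam, n, rng=None):
    """Simulate n steps of the BPMA(1) process (6):
    (N_{1i}, N_{2i}) = diag(alpha1, alpha2) ∘ (eps_{1,i-1}, eps_{2,i-1}) + (eps_{1i}, eps_{2i})."""
    rng = np.random.default_rng() if rng is None else rng
    common = rng.poisson(lam, n + 1)            # common-shock components of eps_0, ..., eps_n
    e1 = rng.poisson(lam1, n + 1) + common
    e2 = rng.poisson(lam2, n + 1) + common
    n1 = rng.binomial(e1[:-1], alpha1) + e1[1:]  # alpha1 ∘ eps_{1,i-1} + eps_{1i}
    n2 = rng.binomial(e2[:-1], alpha2) + e2[1:]
    return n1, n2

n1, n2 = simulate_bpma1(0.5, 0.25, 5.0, 3.0, 2.0, 200_000)
print(n1.mean(), (1 + 0.5) * (5.0 + 2.0))        # sample mean vs (1+alpha1)(lam1+lam)
```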

3.1.2 Expression for adjustment coefficient function

Generally speaking, adjustment coefficients are regarded as safety indices of surplus processes; they are the positive roots of the adjustment coefficient functions. In classical unidimensional Lundberg-type risk models, most of which assume that the surplus processes are Lévy processes, the adjustment coefficient functions are obtained via martingale techniques as the cumulant generating functions (c.g.f.’s) of the net loss processes. However, in our risk models the surplus processes are no longer Lévy processes. According to Nyrhinen [18] and Müller and Pflug [19], adjustment coefficient functions also exist in unidimensional non-Lévy contexts via another approach: letting \(c_{n}(t)\) be the c.g.f. of the aggregate net loss process (aggregate claims minus aggregate premium income) at time n, the adjustment coefficient function is given by \(c(t)=\lim_{n\rightarrow\infty}\frac {1}{n}c_{n}(t)\). In this subsection, we derive the joint c.g.f., denoted by \(c_{n}(t,s)\), of the aggregate net loss process based on model \(\operatorname{BPMA}(1)\). Analogously to Cossétte et al. [15], the adjustment coefficient function \(c(t,s)\) is given by \(c(t,s)=\lim_{n\rightarrow\infty}\frac{1}{n}c_{n}(t,s)\).

Proposition 3.1

The expression for \(c(t,s)\) for model \(\operatorname{BPMA}(1)\) is given by
$$\begin{aligned} c(t,s) =& \lambda_{1}\bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)-1\bigr) + \lambda_{2} \bigl(\alpha_{2}m_{Y}^{2}(s)+\bar{ \alpha}_{2}m_{Y}(s)-1\bigr) \\ &{}+ \lambda\bigl[\bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(\alpha_{2}m_{Y}^{2}(s) +\bar{\alpha}_{2}m_{Y}(s)\bigr)-1\bigr] -\pi_{1}t-\pi_{2}s. \end{aligned}$$
(8)

Proof

See the Appendix. □
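For later numerical work, (8) can be transcribed directly into code once the claim-size m.g.f.’s are specified. Below is a sketch assuming exponential claim sizes with rates beta1 and beta2; this distributional choice and all names are ours for illustration, and the premium rates pi1, pi2 are supplied by the user.

```python
def c_bpma(t, s, lam1, lam2, lam, a1, a2, pi1, pi2, beta1, beta2):
    """Adjustment coefficient function (8) of the BPMA(1) model, assuming
    exponential claim sizes: m_X(t) = 1/(1 - t/beta1) for t < beta1."""
    mX = 1.0 / (1.0 - t / beta1)
    mY = 1.0 / (1.0 - s / beta2)
    gX = a1 * mX**2 + (1 - a1) * mX          # alpha1*m_X^2 + (1 - alpha1)*m_X
    gY = a2 * mY**2 + (1 - a2) * mY
    return lam1 * (gX - 1) + lam2 * (gY - 1) + lam * (gX * gY - 1) - pi1 * t - pi2 * s
```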

Remark 3.1

Referring to the last term of (25), we have
$$\begin{aligned}& \lambda\bigl[ (n-1) \bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(\alpha_{2}m_{Y}^{2}(s)+ \bar{\alpha} _{2}m_{Y}(s)\bigr) \\& \quad {}+ \bigl(\alpha_{1}m_{X}(t)+\bar{ \alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}(s)+\bar{ \alpha}_{2}\bigr)+m_{X}(t)m_{Y}(s)-(n+1)\bigr]. \end{aligned}$$
(9)
This term deserves an explanation. As in the assumptions underlying (2), (4), and (6), we can decompose \((\varepsilon_{1i},\varepsilon_{2i})\) into \((\varepsilon _{1i}'+\varepsilon_{i}',\varepsilon_{2i}'+\varepsilon_{i}')\) for every \(i=0,1,2,\ldots\) , where \(\varepsilon_{1i}'\), \(\varepsilon_{2i}'\), and \(\varepsilon_{i}'\) are three mutually independent Poisson r.v.’s with corresponding parameters \(\lambda_{1}\), \(\lambda_{2}\), and λ, and \(\varepsilon_{i}'\) is called a common shock r.v. Thus there exists a sub-\(\operatorname{BPMA}(1)\) process embedded in \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\); we denote it by \(\{(N_{i}^{\prime(1)},N_{i}^{\prime(2)}),i=1,2,\ldots\}\), and its expression is
$$ \left ( \begin{array}{@{}c@{}} N_{i}^{\prime(1)} \\ N_{i}^{\prime(2)} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \alpha_{1}\circ\varepsilon_{i-1}'+\varepsilon_{i}' \\ \alpha_{2}\circ\varepsilon_{i-1}'+\varepsilon_{i}' \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{\varepsilon_{i-1}'}\delta_{i,i-1,j}^{(1)}+\varepsilon_{i}' \\ \sum_{j=1}^{\varepsilon_{i-1}'}\delta_{i,i-1,j}^{(2)}+\varepsilon_{i}' \end{array} \right ), $$
here \(\{(\delta_{i,i-1,j}^{(1)},\delta_{i,i-1,j}^{(2)}),i=0,1,2,\ldots\} \) are also expressed by some authors as a sequence of i.i.d. bivariate Bernoulli r.v.’s (see Marshall and Olkin [17]). Since in our assumptions \(X_{ij}\) and \(Y_{ij}\) are mutually independent, the assumption of independence between \(\delta_{i,i-1,j}^{(1)}\) and \(\delta _{i,i-1,j}^{(2)}\) does not cause a contradiction. Thus, (9) is the joint m.g.f. of the compound sub-\(\operatorname{BPMA}(1)\) process.

Proposition 3.2

\((S_{1n},S_{2n})\) follows a bivariate compound Poisson distribution. That means we can express it as
$$ S_{1n} = \left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(1n)}}C_{1j}^{(n)},&N_{(1n)}>0, \\ 0,&N_{(1n)}=0, \end{array} \right .\qquad S_{2n} = \left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(2n)}}C_{2j}^{(n)},&N_{(2n)}>0, \\ 0,&N_{(2n)}=0, \end{array} \right . $$
where \(N_{(1n)}\) and \(N_{(2n)}\) have marginal Poisson distributions with parameters \((n+\alpha_{1})(\lambda_{1}+\lambda)\) and \((n+\alpha _{2})(\lambda_{2}+\lambda)\), respectively; \(\{C_{1j}^{(n)},j=1,2,\ldots\}\) and \(\{C_{2j}^{(n)},j=1,2,\ldots\}\) are two mutually independent sequences of i.i.d. r.v.’s with the mixed convolution d.f.’s
$$\begin{aligned}& F_{C_{1j}^{(n)}}(x) = \frac{1}{n+\alpha_{1}} \bigl[(n-1)\alpha _{1}F^{*2}(x)+ \bigl(1+\alpha_{1}+(n-1)\bar{\alpha}_{1} \bigr)F(x) \bigr], \\& G_{C_{2j}^{(n)}}(y) = \frac{1}{n+\alpha_{2}} \bigl[(n-1)\alpha _{2}G^{*2}(y)+ \bigl(1+\alpha_{2}+(n-1)\bar{\alpha}_{2} \bigr)G(y) \bigr], \end{aligned}$$
where \(F^{*2}(x)\) and \(G^{*2}(y)\) are 2-fold convolutions.
If \(n\rightarrow\infty\), then \((N_{(1n)},N_{(2n)})\) asymptotically obeys \(\operatorname{BP}(n\lambda_{1},n\lambda_{2},n\lambda)\). Furthermore,
$$ F_{C_{1j}^{(n)}}(x)\stackrel{d}{\rightarrow}\alpha_{1}F^{*2}(x)+ \bar{\alpha} _{1}F(x),\qquad G_{C_{2j}^{(n)}}(y)\stackrel{d}{ \rightarrow}\alpha_{2}G^{*2}(y)+\bar{\alpha}_{2}G(y). $$

Proof

Referring to (25) and Remark 3.1, we easily get the conclusion. □

3.2 Risk model for \(\operatorname{BPAR}(1)\)

3.2.1 Definition and properties

We consider another bivariate time series model for count data to describe the dependence of the bivariate claim counts among different periods. Suppose that the claim count process \(\{ (N_{1i},N_{2i}), i=1,2,\ldots\}\) is a bivariate Poisson \(\operatorname{AR}(1)\) (\(\operatorname{BPAR}(1)\)) process, whose autoregressive dynamics is given by
$$ \left ( \begin{array}{@{}c@{}} N_{1i} \\ N_{2i} \end{array} \right )= \left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} N_{1,i-1} \\ N_{2,i-1} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{1i} \\ \varepsilon_{2i} \end{array} \right ),\quad i= 1,2,\ldots, $$
(10)
where \(\{(\varepsilon_{1i},\varepsilon_{2i}), i=0, 1, \ldots\}\) is a sequence of i.i.d. \(\operatorname{BP}(\lambda_{1}, \lambda_{2}, \lambda)\) r.v.’s. The process \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\) is stationary if and only if \(\alpha_{1}, \alpha_{2}\in[0,1)\). For convenience, we take the initial r.v. \((N_{10},N_{20})\) of the \(\operatorname{BPAR}(1)\) process to be a copy of \((\varepsilon_{10},\varepsilon_{20})\). Similarly to the \(\operatorname{BPMA}(1)\) case, the dependence structure of the \(\operatorname{BPAR}(1)\) process can be unfolded as
$$ \left ( \begin{array}{@{}c@{}} N_{11} \\ N_{21} \end{array} \right ) =\left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{10}}{\delta_{10j}^{(1)}}+\varepsilon_{11} \\ \sum_{j=1}^{N_{20}}{\delta_{10j}^{(2)}}+\varepsilon_{21} \end{array} \right ) $$
and
$$ \left ( \begin{array}{@{}c@{}} N_{1k} \\ N_{2k} \end{array} \right ) = \left ( \begin{array}{@{}c@{}} \sum_{j=1}^{N_{10}}{\prod_{h=1}^{k}{\delta_{h0j}^{(1)}}} + \sum_{i=1}^{k-1}\sum_{j=1}^{\varepsilon_{1i}}\prod_{h=i+1}^{k}{\delta _{hij}^{(1)}} + \varepsilon_{1k} \\ \sum_{j=1}^{N_{20}}{\prod_{h=1}^{k}{\delta_{h0j}^{(2)}}} + \sum_{i=1}^{k-1}\sum_{j=1}^{\varepsilon_{2i}}\prod_{h=i+1}^{k}{\delta _{hij}^{(2)}} + \varepsilon_{2k} \end{array} \right ) $$
for \(k=2,3\ldots\) .
As mentioned in the previous subsection, \(\{\delta_{hij}^{(1)}, h>i=0,1,2,\ldots,j=1,2,\ldots\}\) and \(\{\delta_{hij}^{(2)}, h>i=0,1,2,\ldots,j=1,2,\ldots\}\) are two independent sequences of i.i.d. Bernoulli r.v.’s with means \(\alpha_{1}\) and \(\alpha_{2}\), respectively. The expectations and covariances of the \(\operatorname{BPAR}(1)\) process are, for \(k,j=1,2\); \(i=1,2,\ldots\) , and \(h=0,1,2,\ldots\) ,
$$\begin{aligned}& E[N_{ki}]=\operatorname{Var}[N_{ki}] = \frac{\lambda_{k}+\lambda}{1-\alpha_{k}}, \qquad \operatorname{Cov}[N_{ki},N_{k,i+h}] = \alpha_{k}^{h}\frac{\lambda_{k}+\lambda}{1-\alpha_{k}}, \\& \operatorname{Corr}[N_{ki},N_{k,i+h}] = \alpha_{k}^{h}, \qquad \operatorname{Cov}[N_{ki},N_{j,i+h}] = \alpha_{j}^{h}\frac{\lambda}{1-\alpha_{1}\alpha_{2}}, \\& \operatorname{Corr}[N_{ki},N_{j,i+h}] =\alpha_{j}^{h} \frac{\lambda\sqrt{(1-\alpha_{1})(1-\alpha_{2})}}{ (1-\alpha_{1}\alpha_{2})\sqrt{(\lambda_{1}+\lambda)(\lambda_{2}+\lambda)}},\quad k\neq j=1,2. \end{aligned}$$
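The recursion (10) can be simulated step by step. The sketch below (names are ours) initializes \((N_{10},N_{20})\) as a copy of \((\varepsilon_{10},\varepsilon_{20})\), as assumed above, and can be used to check, e.g., that the sample mean of \(N_{1i}\) is close to \((\lambda_{1}+\lambda)/(1-\alpha_{1})\).

```python
import numpy as np

def simulate_bpar1(alpha1, alpha2, lam1, lam2, lam, n, rng=None):
    """Simulate n steps of the BPAR(1) recursion (10):
    N_i = diag(alpha1, alpha2) ∘ N_{i-1} + eps_i, eps_i ~ BP(lam1, lam2, lam)."""
    rng = np.random.default_rng() if rng is None else rng

    def bp():
        m = rng.poisson(lam)                     # common shock
        return rng.poisson(lam1) + m, rng.poisson(lam2) + m

    n1_prev, n2_prev = bp()                      # (N_10, N_20): a copy of (eps_10, eps_20)
    path1, path2 = [], []
    for _ in range(n):
        e1, e2 = bp()
        n1_prev = rng.binomial(n1_prev, alpha1) + e1
        n2_prev = rng.binomial(n2_prev, alpha2) + e2
        path1.append(n1_prev)
        path2.append(n2_prev)
    return np.array(path1), np.array(path2)

n1, _ = simulate_bpar1(0.5, 0.25, 5.0, 3.0, 2.0, 100_000)
print(n1.mean(), (5.0 + 2.0) / (1 - 0.5))        # compare with (lam1 + lam)/(1 - alpha1)
```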

3.2.2 Expression for adjustment coefficient function

Proposition 3.3

Assuming that \(\alpha_{1},\alpha_{2}\in[0,1)\) and \(\alpha_{1}m_{X}(t)<1\), \(\alpha_{2}m_{Y}(s)<1\), then the expression for \(c(t,s)\) is given by
$$\begin{aligned} c(t,s) =& \lambda_{1} \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}-1 \biggr]+ \lambda_{2} \biggl[\frac{\bar{\alpha}_{2}m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] \\ &{}+ \lambda \biggl\{ \frac{\bar{\alpha}_{1}\bar{\alpha}_{2}m_{X}(t)m_{Y}(s)}{1-\alpha_{1}\alpha_{2}m_{X}(t)m_{Y}(s)} \biggl[ \frac{\alpha_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}+ \frac{\alpha_{2}m_{Y}(s)}{1-\alpha _{2}m_{Y}(s)}+1 \biggr]-1 \biggr\} \\ &{}-\pi_{1}t-\pi_{2}s; \end{aligned}$$
(11)
and for the special situation if \(\alpha_{1}=0\), \(\alpha_{2}>0\), and \(\alpha _{2}m_{Y}(s)<1\) still holds, we have
$$\begin{aligned} c(t,s) =& \lambda_{1} \bigl(m_{X}(t)-1 \bigr) + \lambda_{2} \biggl[\frac{\bar{\alpha}_{2}m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] \\ &{}+ \lambda \biggl[\frac{\bar{\alpha}_{2}m_{X}(t)m_{Y}(s)}{1-\alpha_{2}m_{Y}(s)}-1 \biggr] -\pi_{1}t- \pi_{2}s; \end{aligned}$$
(12)
symmetrically, if \(\alpha_{2}=0\), \(\alpha_{1}>0\), and \(\alpha_{1}m_{X}(t)<1\) still holds, then
$$\begin{aligned} c(t,s) =& \lambda_{1} \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)}{1-\alpha_{1}m_{X}(t)}-1 \biggr] + \lambda_{2} \bigl(m_{Y}(s)-1 \bigr) \\ &{}+ \lambda \biggl[\frac{\bar{\alpha}_{1}m_{X}(t)m_{Y}(s)}{1-\alpha_{1}m_{X}(t)}-1 \biggr] -\pi_{1}t- \pi_{2}s. \end{aligned}$$
(13)

Proof

See the Appendix. □
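As in the \(\operatorname{BPMA}(1)\) case, (11) can be coded directly once the claim-size m.g.f.’s are given. Below is a sketch under the same illustrative exponential assumption (rates beta1, beta2; all names are ours).

```python
def c_bpar(t, s, lam1, lam2, lam, a1, a2, pi1, pi2, beta1, beta2):
    """Adjustment coefficient function (11) of the BPAR(1) model with
    exponential claim sizes; requires a1*m_X(t) < 1 and a2*m_Y(s) < 1."""
    mX = 1.0 / (1.0 - t / beta1)
    mY = 1.0 / (1.0 - s / beta2)
    gX = (1 - a1) * mX / (1 - a1 * mX)           # bar(alpha1) m_X / (1 - alpha1 m_X)
    gY = (1 - a2) * mY / (1 - a2 * mY)
    cross = ((1 - a1) * (1 - a2) * mX * mY / (1 - a1 * a2 * mX * mY)
             * (a1 * mX / (1 - a1 * mX) + a2 * mY / (1 - a2 * mY) + 1))
    return lam1 * (gX - 1) + lam2 * (gY - 1) + lam * (cross - 1) - pi1 * t - pi2 * s
```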

Remark 3.2

Similarly, referring to the last term of (35), we focus on
$$ \lambda\Biggl[ A_{n}\bigl\{ m_{X}(t)\bigr\} B_{n}\bigl\{ m_{Y}(s)\bigr\} +m_{X}(t)m_{Y}(s) \sum_{i=0}^{n-1}{A_{i} \bigl\{ m_{X}(t)\bigr\} B_{i}\bigl\{ m_{Y}(s)\bigr\} }-(n+1)\Biggr]. $$
(14)
Referring to Remark 3.1, there exists a sub-\(\operatorname{BPAR}(1)\) process embedded in \(\{(N_{1i},N_{2i}),i=1,2,\ldots\}\); we denote it by \(\{ (N_{i}^{\prime(1)},N_{i}^{\prime(2)}),i=1,2,\ldots\}\), with the dynamics
$$ \left ( \begin{array}{@{}c@{}} N_{i}^{\prime(1)} \\ N_{i}^{\prime(2)} \end{array} \right ) =\left ( \begin{array}{@{}c@{\quad}c@{}} \alpha_{1} & 0 \\ 0 & \alpha_{2} \end{array} \right ) \circ \left ( \begin{array}{@{}c@{}} N_{i-1}^{\prime(1)} \\ N_{i-1}^{\prime(2)} \end{array} \right ) + \left ( \begin{array}{@{}c@{}} \varepsilon_{i}' \\ \varepsilon_{i}' \end{array} \right ) $$
for \(i=1,2,\ldots,n\), where \(\{\varepsilon_{i}', i=1,2,\ldots\}\) is a sequence of mutually independent \(\operatorname{Po}(\lambda)\) r.v.’s and \(N_{0}^{\prime(1)}=N_{0}^{\prime(2)}\) is a copy of \(\varepsilon_{1}'\). Equation (14) is the joint m.g.f. of \(\sum_{i=1}^{n} (\sum_{j=1}^{N_{i}^{\prime(1)}}X_{i,j}, \sum_{j=1}^{N_{i}^{\prime(2)}}Y_{i,j} )\).

Given (34) and Remark 3.2, we have the following conclusion.

Proposition 3.4

For \(0<\alpha_{1},\alpha_{2}<1\), \((S_{1n},S_{2n})\) follows the bivariate compound Poisson distribution as
$$ S_{1n}=\left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(1n)}}C_{1j}^{(n)},&N_{(1n)}>0, \\ 0,&N_{(1n)}=0, \end{array} \right .\qquad S_{2n}=\left \{ \begin{array}{l@{\quad}l} \sum_{j=1}^{N_{(2n)}}C_{2j}^{(n)},&N_{(2n)}>0, \\ 0,&N_{(2n)}=0, \end{array} \right . $$
where \(N_{(1n)}\) and \(N_{(2n)}\) have marginal Poisson distributions with parameters \((n+\alpha_{1})(\lambda_{1}+\lambda)\) and \((n+\alpha _{2})(\lambda_{2}+\lambda)\), respectively; and \(\{C_{1j}^{(n)},j=1,2,\ldots \}\) and \(\{C_{2j}^{(n)},j=1,2,\ldots\}\) are two independent sequences of i.i.d. r.v.’s with the mixed convolution d.f.’s
$$\begin{aligned}& F_{C_{1j}^{(n)}}(x) = \frac{1}{n+\alpha_{1}}\Biggl\{ (n\bar{\alpha}_{1}+ \bar{\alpha}_{1}\alpha_{1})F(x) \\& \hphantom{F_{C_{1j}^{(n)}}(x) =}{} + \sum_{i=2}^{n-1}\bigl[ \alpha_{1}+\alpha_{1}^{i}+(n-i)\bar{ \alpha}_{1}\alpha_{1}^{i}\bigr]F^{*i}(x) +\bigl(\alpha_{1}^{n-1}+\alpha_{1}^{n} \bigr)F^{*n}(x)\Biggr\} , \\& G_{C_{2j}^{(n)}}(y) = \frac{1}{n+\alpha_{2}}\Biggl\{ (n\bar{\alpha}_{2}+ \bar{\alpha}_{2}\alpha_{2})G(y) \\& \hphantom{G_{C_{2j}^{(n)}}(y) =}{} + \sum_{i=2}^{n-1}\bigl[ \alpha_{2}+\alpha_{2}^{i}+(n-i)\bar{ \alpha}_{2}\alpha_{2}^{i}\bigr]G^{*i}(y) + \bigl(\alpha_{2}^{n-1}+\alpha_{2}^{n} \bigr)G^{*n}(y)\Biggr\} . \end{aligned}$$
If \(n\rightarrow\infty\), then \((N_{(1n)},N_{(2n)})\) asymptotically obeys \(\operatorname{BP}(n\lambda_{1},n\lambda_{2},n\lambda)\), and furthermore,
$$ F_{C_{1j}^{(n)}}(x)\stackrel{d}{\rightarrow}\bar{\alpha}_{1}\sum _{i=1}^{\infty}\alpha_{1}^{i-1}F^{*i}(x), \qquad G_{C_{2j}^{(n)}}(y)\stackrel{d}{\rightarrow}\bar{\alpha}_{2} \sum_{i=1}^{\infty}\alpha_{2}^{i-1}G^{*i}(y). $$

4 Approximations to ruin probabilities

In this section, we mainly discuss the approximations to \(\Psi _{\mathrm{max}}(u_{1},u_{2})\) and \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\) for the two different models mentioned above.

Define \(t^{0}:=\sup\{t; m_{X}(t)<\infty\}\), \(s^{0}:=\sup\{s; m_{Y}(s)<\infty\}\), and \(G:=\{(t,s); c(t,s)<\infty, (t,s)>(0,0)\}\). Recalling the expressions of \(c(t,s)\) for the two models and the assumption that the \(X_{ij}\) (\(Y_{ij}\)) are i.i.d. with light-tailed distributions, it is clear that \(t^{0}>0\), \(s^{0}>0\), and G is nonempty.

Lemma 4.1

For the adjustment coefficient functions of the \(\operatorname{BPMA}(1)\) and \(\operatorname{BPAR}(1)\) cases, the following statements hold.

  (a) The equation \(c(t,s)=0\) has at least one root in G.

  (b) For given \(l\geq0\), the equation \(c(t,lt)=0\) has only one root in \((0, t^{0})\).

  (c) For given \(l\geq0\), if \(v>0\) solves \(c(t,lt)=0\), then \(c(t,lt)>0\) for all \(t>v\) and \(c(t,lt)<0\) for all \(0< t<v\).

Proof

We only prove the \(\operatorname{BPMA}(1)\) case here, the \(\operatorname{BPAR}(1)\) case could be proved the same way.

Let \(s=lt\), for some given \(l\geq0\),
$$\begin{aligned} \frac{dc(t,lt)}{dt} =& \lambda_{1}m_{X}^{\prime}(t) \bigl(2\alpha_{1}m_{X}(t)+\bar{\alpha}_{1} \bigr) +\lambda_{2}lm_{Y}^{\prime}(lt) \bigl(2 \alpha_{2}m_{Y}(lt)+\bar{\alpha}_{2}\bigr) \\ &{}+ \lambda\bigl[ m_{X}^{\prime}(t) \bigl(2 \alpha_{1}m_{X}(t)+\bar{\alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}^{2}(lt)+\bar{ \alpha}_{2}m_{Y}(lt)\bigr) \\ &{}+ lm_{Y}^{\prime}(lt) \bigl(\alpha_{1}m_{X}^{2}(t)+ \bar{\alpha}_{1}m_{X}(t)\bigr) \bigl(2 \alpha_{2}m_{Y}(lt)+\bar{\alpha}_{2}\bigr)\bigr] \\ &{}- (1+\rho) (1+\alpha_{1}) (\lambda_{1}+\lambda) \mu_{1} - l(1+\rho) (1+\alpha_{2}) (\lambda_{2}+ \lambda)\mu_{2}, \end{aligned}$$
so that
$$\begin{aligned} {\biggl.\frac{dc(t,lt)}{dt}\biggr|_{t=0} } =& (1+\alpha_{1}) ( \lambda_{1}+\lambda)\mu_{1}+l(1+\alpha_{2}) ( \lambda_{2}+\lambda)\mu _{2} \\ &{}- (1+\rho) (1+\alpha_{1}) (\lambda_{1}+\lambda) \mu_{1}-l(1+\rho) (1+\alpha _{2}) (\lambda_{2}+ \lambda)\mu_{2} < 0. \end{aligned}$$
For every \(t>0\) and \(l\geq0\), we have \(\frac{d^{2}c(t,lt)}{dt^{2}}>0\).

This means that the function \(c(t,lt)\) is convex in \(t\in(0,t^{0})\). Since \(c_{1}^{\prime}(0,0)<0\) and \(c_{2}^{\prime}(0,0)<0\), and \(m_{X}^{\prime}(t)\) and \(m_{Y}^{\prime}(s)\) are monotone increasing functions of t and s, \(\frac{dc(t,lt)}{dt}\) is a monotone increasing function of t; hence, for given \(l\geq0\), there exists \(t_{1}>0\) such that \(\frac{dc(t_{1},lt_{1})}{dt}>0\). Together with the convexity, this proves (a) and (b): the equation \(c(t,lt)=0\) has exactly one root in \((0,t^{0})\).

(c) The result is obvious from the convexity of \(c(t,lt)\) on \((0,t^{0})\). □

Lemma 4.2

Referring to Lemma 4.1 and letting \(\Delta=\{(t,s);c(t,s)=0,(t,s)\in G\}\), the following statements hold.

  (a) \(0<\frac{dc(t,s)}{dt}+\frac{dc(t,s)}{ds}<\infty\) for \((t,s)\in\Delta\);

  (b) \(c_{n}(t,s)<\infty\) for \((t,s)\in\Delta\).

Proof

(a) There exists \(l\geq0\) such that \(s=lt\); then \(c(t,s)=c(t,lt)=0\) for any \((t,s)\in\Delta\). By the mean value theorem, there exists \(\xi\in(0,t)\) such that \(c(t,lt)-c(0,0)=c_{1}^{\prime}(\xi,l\xi)t+c_{2}^{\prime}(\xi,l\xi)lt=0\). Since \(\frac{d^{2}c(t,lt)}{dt^{2}}>0\) for \(t\geq0\), \(c_{1}^{\prime}(t,lt)+c_{2}^{\prime}(t,lt)l>c_{1}^{\prime}(\xi,l\xi)+c_{2}^{\prime}(\xi,l\xi)l=0\). Varying l from 0 to ∞, the conclusion is proved.

For (b), note that for the \(\operatorname{BPMA}(1)\) process,
$$\begin{aligned} c_{n}(t,s) =& (n-1)c(t,s) + \lambda_{1} \bigl[(1+\alpha_{1})m_{X}(t)+ \bar{\alpha}_{1}-2 \bigr] + \lambda_{2} \bigl[(1+\alpha_{2})m_{Y}(s)+ \bar{\alpha}_{2}-2 \bigr] \\ &{}+ \lambda \bigl[\bigl(\alpha_{1}m_{X}(t)+\bar{ \alpha}_{1}\bigr) \bigl(\alpha_{2}m_{Y}(s)+\bar{ \alpha} _{2}\bigr)+m_{X}(t)m_{Y}(s)-2 \bigr]< \infty \end{aligned}$$
For the \(\operatorname{BPAR}(1)\) process, we present only the case \(0<\alpha_{1},\alpha_{2}<1\):
$$\begin{aligned} c_{n}(t,s) =& (n-1)c(t,s) \\ &{}+ \lambda_{1} \biggl\{ \frac{m_{X}(t)[1-A_{n}\{m_{X}(t)\}+\bar{\alpha}_{1}]}{1-\alpha_{1}m_{X}(t)} +A_{n}\bigl\{ m_{X}(t)\bigr\} -2 \biggr\} \\ &{}+ \lambda_{2} \biggl\{ \frac{m_{Y}(s)[1-B_{n}\{m_{Y}(s)\}+\bar{\alpha}_{2}]}{1-\alpha_{2}m_{Y}(s)} +B_{n}\bigl\{ m_{Y}(s)\bigr\} -2 \biggr\} \\ &{}+ \lambda \biggl\{ \frac{m_{X}(t)m_{Y}(s)}{1-\alpha_{1}\alpha_{2}m_{X}(t)m_{Y}(s)}L' +A_{n}\bigl\{ m_{X}(t)\bigr\} B_{n}\bigl\{ m_{Y}(s)\bigr\} -2 \biggr\} , \end{aligned}$$
where
$$\begin{aligned} L' =& 1-A_{n-1}\bigl\{ m_{X}(t)\bigr\} B_{n-1}\bigl\{ m_{Y}(s)\bigr\} + \frac{\alpha_{1}\bar{\alpha}_{2}m_{X}(t)[1-A_{n-1}\{m_{X}(t)\}]}{1-\alpha _{1}m_{X}(t)} \\ &{}+ \frac{\bar{\alpha}_{1}\alpha_{2}m_{Y}(s)[1-B_{n-1}\{m_{Y}(s)\}]}{1-\alpha_{2}m_{Y}(s)}, \end{aligned}$$
since \(\alpha_{1}m_{X}(t)<1\) and \(\alpha_{2}m_{Y}(s)<1\) for every point \((t,s)\in\Delta\), recalling the expressions of \(A_{n}\{m_{X}(t)\}\) and \(B_{n}\{m_{Y}(s)\}\) given in the Appendix, we have \(c_{n}(t,s)<\infty\). So the conclusion is proved. In fact, Δ is a smooth curve in the first quadrant. □

As for the ruin problems of bidimensional risk models, many authors only gave upper bounds for \(\Psi_{\mathrm{max}}(u_{1},u_{2})\) via martingale inequalities (see Chan et al. [1] and Li et al. [3]) for Lévy processes, because the Cramér-Lundberg constants are hard to obtain. Since our risk models are non-Lévy processes, many classical results of ruin theory, especially the Wald martingale theorem, cannot be applied. The large deviations theorem was introduced to approximate ruin probabilities by Glynn and Whitt [20]. However, their attention was mainly confined to univariate contexts. We borrow the idea of Glynn and Whitt [20] and extend their main results to bivariate contexts.

Theorem 4.1

For the two risk models, \(\Psi_{\mathrm{max}}(u_{1},u_{2}) \stackrel{\mathrm{log}}{\sim}\inf_{(t,s)\in\Delta}{e^{-tu_{1}-su_{2}}}\) as \(u_{1},u_{2}\rightarrow\infty\), where \((t^{*},s^{*})=\arg\inf_{(t,s)\in\Delta}\{e^{-tu_{1}-su_{2}}\}\) is the adjustment coefficient.

Proof

See the Appendix. □

\(\Psi_{\mathrm{sum}}(u_{1},u_{2})\) is actually a univariate probability. By the univariate large deviations theorem, it is easy to get the following result.

Theorem 4.2

For the two risk models, \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\stackrel{\mathrm{log}}{\sim }e^{-t^{*}(u_{1}+u_{2})}\) as \(u_{1}+u_{2}\rightarrow\infty\), where \(t^{*}=\arg\{ t;c(t,t)=0, t>0\}\) is the adjustment coefficient.

Proof

The c.g.f. of \(S_{1n}+S_{2n}-n\pi_{1}-n\pi_{2}\) is \(c_{n}(t,t)\), and the corresponding adjustment coefficient function is \(c(t,t)\); then, by the univariate large deviations theory of Glynn and Whitt [20], the result follows. □

Theorem 4.3

For our two models, we have
$$ \Psi_{\mathrm{min}}(u_{1},u_{2})\leq\Psi_{1}(u_{1})+ \Psi_{2}(u_{2})-\Psi_{\mathrm{max}}(u_{1},u_{2}), $$
where \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\) are the marginal ruin probabilities of the first and the second businesses, respectively. Furthermore, the approximations to \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\) are presented as
$$\begin{aligned}& \Psi_{1}(u_{1})\stackrel{\mathrm{log}}{ \sim}e^{-t'u_{1}}, \quad u_{1}\rightarrow\infty, \end{aligned}$$
(15)
$$\begin{aligned}& \Psi_{2}(u_{2})\stackrel{\mathrm{log}}{ \sim}e^{-s'u_{2}},\quad u_{2}\rightarrow\infty, \end{aligned}$$
(16)
where for model \(\operatorname{BPMA}(1)\), \(t'\) and \(s'\) are the positive roots of \((\lambda_{1}+\lambda)(\alpha_{1}m_{X}^{2}(t')+\bar{\alpha}_{1}m_{X}(t')-1)-\pi_{1}t'\) and \((\lambda_{2}+\lambda)(\alpha_{2}m_{Y}^{2}(s')+\bar{\alpha}_{2}m_{Y}(s')-1)-\pi _{2}s'\), respectively; and for model \(\operatorname{BPAR}(1)\), \(t'\) and \(s'\) are the positive roots of \((\lambda_{1}+\lambda)(\frac{\bar{\alpha} _{1}m_{X}(t')}{1-\alpha_{1}m_{X}(t')}-1)-\pi_{1}t'\) and \((\lambda_{2}+\lambda)(\frac {\bar{\alpha}_{2}m_{Y}(s')}{1-\alpha_{2}m_{Y}(s')}-1)-\pi_{2}s'\), respectively.

Proof

For the approximations to marginal ruin probabilities see Cossétte et al. [15]. □

Remark 4.1

For \(\Psi_{\mathrm{min}}(u_{1}, u_{2})\), since \(\Psi _{\mathrm{max}}(u_{1}, u_{2})=o(e^{-t'u_{1}-s'u_{2}})\),
$$ \Psi_{\mathrm{min}}(u_{1}, u_{2})\sim \Psi_{1}(u_{1})+\Psi_{2}(u_{2}), \quad \text {as } u_{1},u_{2}\rightarrow\infty. $$
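Before turning to the numerical experiments, the three ruin probabilities can also be estimated over a finite horizon by crude Monte Carlo, which makes the difference between \(T_{\mathrm{max}}\), \(T_{\mathrm{sum}}\), and \(T_{\mathrm{min}}\) concrete. The sketch below is ours: the function names, the finite horizon, the exponential claim sizes, and the rates β1 = β2 = 1 are illustrative assumptions; the claim counts follow the \(\operatorname{BPMA}(1)\) model and the premiums carry the loading ρ.

```python
import numpy as np

def ruin_probs_bpma(alpha1, alpha2, lam1, lam2, lam, beta1, beta2,
                    rho, u1, u2, horizon, n_sims, rng=None):
    """Crude Monte Carlo estimates of the finite-horizon ruin probabilities
    Psi_max, Psi_sum, Psi_min of model (1) with BPMA(1) claim counts."""
    rng = np.random.default_rng() if rng is None else rng
    pi1 = (1 + rho) * (1 + alpha1) * (lam1 + lam) / beta1   # premium = (1+rho) * E[claims/period]
    pi2 = (1 + rho) * (1 + alpha2) * (lam2 + lam) / beta2
    periods = np.arange(1, horizon + 1)
    hit_max = hit_sum = hit_min = 0
    for _ in range(n_sims):
        common = rng.poisson(lam, horizon + 1)
        e1 = rng.poisson(lam1, horizon + 1) + common
        e2 = rng.poisson(lam2, horizon + 1) + common
        n1 = rng.binomial(e1[:-1], alpha1) + e1[1:]          # BPMA(1) counts per period
        n2 = rng.binomial(e2[:-1], alpha2) + e2[1:]
        w1 = np.array([rng.exponential(1 / beta1, k).sum() for k in n1])
        w2 = np.array([rng.exponential(1 / beta2, k).sum() for k in n2])
        r1 = u1 + pi1 * periods - np.cumsum(w1)              # surplus paths of model (1)
        r2 = u2 + pi2 * periods - np.cumsum(w2)
        hit_max += np.any(np.maximum(r1, r2) <= 0)           # both businesses ruined at same time
        hit_sum += np.any(r1 + r2 <= 0)                      # aggregate surplus ruined
        hit_min += np.any(np.minimum(r1, r2) <= 0)           # at least one business ruined
    return hit_max / n_sims, hit_sum / n_sims, hit_min / n_sims

print(ruin_probs_bpma(0.5, 0.5, 5.0, 3.0, 2.0, 1.0, 1.0, 0.2,
                      u1=20.0, u2=30.0, horizon=200, n_sims=2000))
```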

5 Numerical experiments and simulations

5.1 Calculations for adjustment coefficients

Let \(u_{1}:u_{2}=2:3\), \(\lambda_{1}=5\), \(\lambda_{2}=3\), \(\lambda=2\), and let X and Y follow exponential distributions with parameters \(\beta_{1}\) and \(\beta_{2}\), respectively, i.e., \(\mu_{1}=1/\beta_{1}\), \(m_{X}(t)=\frac {1}{1-t/\beta_{1}}\), \(t<\beta_{1}\), and \(\mu_{2}=1/\beta_{2}\), \(m_{Y}(s)=\frac {1}{1-s/\beta_{2}}\), \(s<\beta_{2}\); the safety loading coefficient is \(\rho =0.2\) for both classes of business. We compute three groups of adjustment coefficients: first, \((t^{*},s^{*})\) for \(\Psi _{\mathrm{max}}(u_{1},u_{2})\); second, \(t^{*}\) for \(\Psi_{\mathrm{sum}}(u_{1},u_{2})\); and third, \(t'\) and \(s'\) for \(\Psi_{1}(u_{1})\) and \(\Psi_{2}(u_{2})\), respectively. In Tables 1-6, the values of \(\alpha_{1}\) index the rows and the values of \(\alpha_{2}\) index the columns.
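The paper does not list the exponential rates β1 and β2 explicitly, so the following sketch uses the illustrative choice β1 = β2 = 1 together with the stated λ1 = 5, λ2 = 3, λ = 2, ρ = 0.2, u1:u2 = 2:3, and one pair (α1, α2) = (0.5, 0.5). It shows how the adjustment coefficients can be obtained numerically for the \(\operatorname{BPMA}(1)\) model: t* solves c(t,t) = 0 (Theorem 4.2), (t*, s*) attains \(\inf_{(t,s)\in\Delta}e^{-tu_{1}-su_{2}}\) (Theorem 4.1), implemented here as maximizing \(tu_{1}+su_{2}\) over \(\{c(t,s)\le0\}\) (the maximizer lies on Δ), and t' solves the marginal equation of Theorem 4.3. All names are ours.

```python
import numpy as np
from scipy.optimize import brentq, minimize

# Stated parameters; beta1 = beta2 = 1 is our illustrative assumption.
lam1, lam2, lam, rho = 5.0, 3.0, 2.0, 0.2
a1, a2 = 0.5, 0.5
beta1 = beta2 = 1.0
pi1 = (1 + rho) * (1 + a1) * (lam1 + lam) / beta1   # premium = (1+rho) * expected claims per period
pi2 = (1 + rho) * (1 + a2) * (lam2 + lam) / beta2

def c(t, s):
    """Adjustment coefficient function (8) of the BPMA(1) model, exponential claims."""
    mX, mY = 1 / (1 - t / beta1), 1 / (1 - s / beta2)
    gX, gY = a1 * mX**2 + (1 - a1) * mX, a2 * mY**2 + (1 - a2) * mY
    return lam1 * (gX - 1) + lam2 * (gY - 1) + lam * (gX * gY - 1) - pi1 * t - pi2 * s

# t* for Psi_sum (Theorem 4.2): the positive root of c(t, t) = 0.
t_sum = brentq(lambda t: c(t, t), 1e-9, 0.999 * min(beta1, beta2))

# (t*, s*) for Psi_max (Theorem 4.1): maximise t*u1 + s*u2 over {c(t, s) <= 0}.
u1, u2 = 2.0, 3.0                                    # only the ratio u1:u2 = 2:3 matters
res = minimize(lambda x: -(u1 * x[0] + u2 * x[1]), x0=[0.05, 0.05], method="SLSQP",
               bounds=[(1e-9, 0.999 * beta1), (1e-9, 0.999 * beta2)],
               constraints=[{"type": "ineq", "fun": lambda x: -c(x[0], x[1])}])
print(f"t* (Psi_sum) = {t_sum:.4f},  (t*, s*) (Psi_max) = ({res.x[0]:.4f}, {res.x[1]:.4f})")

# t' for the marginal ruin probability Psi_1 (Theorem 4.3), BPMA(1) case.
mX = lambda t: 1 / (1 - t / beta1)
t_marg = brentq(lambda t: (lam1 + lam) * (a1 * mX(t)**2 + (1 - a1) * mX(t) - 1) - pi1 * t,
                1e-9, 0.999 * beta1)
print(f"t' (marginal, first business) = {t_marg:.4f}")
```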
Table 1

The adjustment coefficient \((t^{*},s^{*})\) of \(\Psi_{\mathrm{max}}\) for model \(\operatorname{BPMA}(1)\)

\(\alpha_{1}\backslash\alpha_{2}\) | 0 | 0.25 | 0.5 | 0.75 | 1
0    | (0.2279, 0.8665) | (0.2483, 0.7304) | (0.2618, 0.6692) | (0.2735, 0.6319) | (0.2839, 0.6061)
0.25 | (0.1633, 0.8835) | (0.1802, 0.7474) | (0.1907, 0.6865) | (0.1998, 0.6496) | (0.2078, 0.6242)
0.5  | (0.1265, 0.8938) | (0.1413, 0.7577) | (0.1500, 0.6972) | (0.1573, 0.6606) | (0.1637, 0.6355)
0.75 | (0.1016, 0.9014) | (0.1147, 0.7653) | (0.1219, 0.7052) | (0.1279, 0.6689) | (0.1332, 0.6442)
1    | (0.0832, 0.9073) | (0.0950, 0.7714) | (0.1010, 0.7116) | (0.1058, 0.6757) | (0.1101, 0.6513)

Table 2

The adjustment coefficient \((t^{*},s^{*})\) of \(\Psi_{\mathrm{max}}\) for model \(\operatorname{BPAR}(1)\)

\(\alpha_{1}\backslash\alpha_{2}\) | 0 | 0.25 | 0.5 | 0.75 | 0.95
0    | (0.2279, 0.8665) | (0.2674, 0.6271) | (0.3142, 0.3926) | (0.3671, 0.1718) | (0.4004, 0.0264)
0.25 | (0.1386, 0.8914) | (0.1709, 0.6499) | (0.2116, 0.4111) | (0.2620, 0.1860) | (0.2991, 0.0273)
0.5  | (0.0611, 0.9172) | (0.0833, 0.6570) | (0.1139, 0.4333) | (0.1571, 0.1963) | (0.1974, 0.0289)
0.75 | (0.0067, 0.9405) | (0.0158, 0.7000) | (0.0306, 0.4586) | (0.0570, 0.2160) | (0.0943, 0.0328)
0.95 | (0.0000, 0.9444) | (0.0000, 0.7083) | (0.0000, 0.4722) | (0.0001, 0.2360) | (0.0114, 0.0433)

Table 3

The adjustment coefficient \(t^{*}\) of \(\Psi_{\mathrm{sum}}\) for model \(\operatorname{BPMA}(1)\)

\(\alpha_{1}\backslash\alpha_{2}\) | 0 | 0.25 | 0.5 | 0.75 | 1
0    | 0.3922 | 0.4074 | 0.4133 | 0.4178 | 0.4212
0.25 | 0.3226 | 0.3355 | 0.3453 | 0.3531 | 0.3593
0.5  | 0.2788 | 0.2926 | 0.3035 | 0.3123 | 0.3196
0.75 | 0.2493 | 0.2630 | 0.2741 | 0.2832 | 0.2908
1    | 0.2278 | 0.2411 | 0.2519 | 0.2610 | 0.2687

Table 4

The adjustment coefficient \(t^{*}\) of \(\Psi_{\mathrm{sum}}\) for model \(\operatorname{BPAR}(1)\)

\(\alpha_{1}\backslash\alpha_{2}\) | 0 | 0.25 | 0.5 | 0.75 | 0.95
0    | 0.3922 | 0.3999 | 0.3565 | 0.2182 | 0.0468
0.25 | 0.2855 | 0.2994 | 0.2947 | 0.2070 | 0.0467
0.5  | 0.1707 | 0.1851 | 0.1996 | 0.1782 | 0.0463
0.75 | 0.0686 | 0.0751 | 0.0854 | 0.0998 | 0.0447
0.95 | 0.0096 | 0.0100 | 0.0108 | 0.0128 | 0.0200

Table 5

Adjustment coefficients of the marginal ruin probabilities for model \(\operatorname{BPMA}(1)\)

α      | 0      | 0.25   | 0.5    | 0.75   | 1
\(t'\) | 0.1667 | 0.1396 | 0.1265 | 0.1186 | 0.1134
\(s'\) | 0.9444 | 0.8130 | 0.7562 | 0.7229 | 0.7008

Table 6

Adjustment coefficients of the marginal ruin probabilities for model \(\operatorname{BPAR}(1)\)

α      | 0      | 0.25   | 0.5    | 0.75   | 0.95
\(t'\) | 0.1583 | 0.1250 | 0.0833 | 0.0417 | 0.0083
\(s'\) | 0.8972 | 0.7083 | 0.4722 | 0.2361 | 0.0472

5.2 Calculations for marginal VaR

In this subsection, we give the marginal VaR values for our two models under the assumption that \(X_{ij}\) and \(Y_{ij}\) are mutually independent exponential r.v.’s with parameter 1, \(i,j=1,2,\ldots\) . Then, according to Propositions 3.2 and 3.4, we derive the asymptotic densities of \(\mathbf{S}_{n}\) for large n as
$$\begin{aligned} f_{\mathbf{S}_{n}}(x,y) =& e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \\ &{}+e^{-n(\lambda_{1}+\lambda_{2}+\lambda)}\sum_{k_{1}=1}^{\infty}\sum _{k_{2}=1}^{\infty}\sum _{k=0}^{k_{1}\wedge k_{2}} \frac{n^{k_{1}+k_{2}-k}\lambda_{1}^{k_{1}-k}\lambda_{2}^{k_{2}-k}\lambda ^{k}}{(k_{1}-k)!(k_{2}-k)!k!} \\ &{}\times \sum_{i=0}^{k_{1}}\left ( \begin{array}{@{}c@{}} k_{1} \\ i \end{array} \right )\alpha_{1}^{i} \bar{\alpha}_{1}^{k_{1}-i}\operatorname{Ga}(x;k_{1}+i,1) \\ &{}\times \sum_{j=0}^{k_{2}}\left ( \begin{array}{@{}c@{}} k_{2} \\ j \end{array} \right )\alpha_{2}^{j} \bar{\alpha}_{2}^{k_{2}-j}\operatorname{Ga}(y;k_{2}+j,1) \end{aligned}$$
and
$$\begin{aligned} f_{\mathbf{S}_{n}}(x,y) =& e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \\ &{}+ e^{-n(\lambda_{1}+\lambda_{2}+\lambda)} \sum_{k_{1}=1}^{\infty}\sum _{k_{2}=1}^{\infty}\sum _{k=0}^{k_{1}\wedge k_{2}} \frac{n^{k_{1}+k_{2}-k}\lambda_{1}^{k_{1}-k}\lambda_{2}^{k_{2}-k}\lambda ^{k}}{(k_{1}-k)!(k_{2}-k)!k!} \\ &{}\times \bar{\alpha}_{1}^{k_{1}} \Biggl[\sum _{i=1}^{\infty}\alpha _{1}^{i-1} \operatorname{Ga}(x;i,1) \Biggr]^{*k_{1}} \bar{\alpha}_{2}^{k_{2}} \Biggl[\sum_{j=1}^{\infty}\alpha_{2}^{j-1} \operatorname{Ga}(y;j,1) \Biggr]^{*k_{2}} \end{aligned}$$
for model \(\operatorname{BPMA}(1)\) and model \(\operatorname{BPAR}(1)\), respectively, where \(\operatorname{Ga}(x;i,1)\) and \(\operatorname{Ga}(y;j,1)\) are gamma densities with parameters \((i, 1)\) and \((j, 1)\) in the variables x and y, respectively. We also get the marginal densities for model \(\operatorname{BPMA}(1)\):
$$\begin{aligned}& f_{S_{1n}}(x) = e^{-n(\lambda_{1}+\lambda)}+e^{-n(\lambda_{1}+\lambda)}\sum _{k_{1}=1}^{\infty }\frac{[n(\lambda_{1}+\lambda)]^{k_{1}}}{k_{1}!} \\& \hphantom{f_{S_{1n}}(x) =}{}\times \sum_{i=0}^{k_{1}} \left ( \begin{array}{@{}c@{}} k_{1} \\ i \end{array} \right )\alpha_{1}^{i} \bar{\alpha}_{1}^{k_{1}-i}\operatorname{Ga}(x;k_{1}+i,1), \\& f_{S_{2n}}(y) = e^{-n(\lambda_{2}+\lambda)}+e^{-n(\lambda_{2}+\lambda)}\sum _{k_{2}=1}^{\infty }\frac{[n(\lambda_{2}+\lambda)]^{k_{2}}}{k_{2}!} \\& \hphantom{f_{S_{2n}}(y) =}{}\times \sum_{j=0}^{k_{2}} \left ( \begin{array}{@{}c@{}} k_{2} \\ j \end{array} \right )\alpha_{2}^{j} \bar{\alpha}_{2}^{k_{2}-j}\operatorname{Ga}(y;k_{2}+j,1); \end{aligned}$$
and the marginal densities for model \(\operatorname{BPAR}(1)\) for \(0<\alpha_{1},\alpha_{2}<1\):
$$\begin{aligned}& f_{S_{1n}}(x) = e^{-n(\lambda_{1}+\lambda)}+e^{-n(\lambda_{1}+\lambda)}\sum _{k_{1}=1}^{\infty }\frac{[n(\lambda_{1}+\lambda)]^{k_{1}}}{k_{1}!} \\& \hphantom{f_{S_{1n}}(x) =}{} \times \bar{\alpha}_{1}^{k_{1}} \Biggl[\sum _{i=1}^{\infty}\alpha_{1}^{i-1} \operatorname{Ga}(x;i,1) \Biggr]^{*k_{1}}, \\& f_{S_{2n}}(y) = e^{-n(\lambda_{2}+\lambda)}+e^{-n(\lambda_{2}+\lambda)}\sum _{k_{2}=1}^{\infty }\frac{[n(\lambda_{2}+\lambda)]^{k_{2}}}{k_{2}!} \\& \hphantom{f_{S_{2n}}(y) =}{} \times \bar{\alpha}_{2}^{k_{2}} \Biggl[\sum _{j=1}^{\infty}\alpha_{2}^{j-1} \operatorname{Ga}(y;j,1) \Biggr]^{*k_{2}}. \end{aligned}$$
So \(\operatorname{VaR}_{S_{1n}}(\cdot)=\operatorname{VaR}_{S_{2n}}(\cdot)\) at the same level if \(\alpha_{1}=\alpha_{2}\) and \(\lambda_{1}=\lambda_{2}\), for both models; we denote it by \(\operatorname{VaR}_{S_{n}}(\cdot)\). The marginal VaR values are presented in Tables 7 and 8.
Table 7

Marginal VaR for model \(\operatorname{BPMA}(1)\)

α                                    | 0     | 0.25  | 0.5   | 0.75  | 1
\(\operatorname{VaR}_{S_{5}}(0.90)\) | 15.98 | 20.60 | 24.10 | 30.20 | 35.28
\(\operatorname{VaR}_{S_{5}}(0.95)\) | 18.12 | 23.21 | 28.40 | 33.68 | 39.05

Given \(\lambda_{1}=\lambda_{2}=\lambda=1\) and n = 5, the marginal VaR at levels θ = 0.90, 0.95.

Table 8

Marginal VaR for model \(\operatorname{BPAR}(1)\)

α                                    | 0     | 0.25  | 0.5   | 0.75  | 0.95
\(\operatorname{VaR}_{S_{3}}(0.90)\) | 10.65 | 13.60 | 17.42 | 22.41 | 27.33
\(\operatorname{VaR}_{S_{3}}(0.95)\) | 12.46 | 15.75 | 20.11 | 25.60 | 30.93

Given \(\lambda_{1}=\lambda_{2}=\lambda=1\) and n = 3, the marginal VaR at levels θ = 0.90, 0.95.
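The marginal VaR values above can also be approximated by straightforward Monte Carlo, which avoids truncating the infinite series for the densities. Below is a sketch for \(S_{1n}\) under the \(\operatorname{BPMA}(1)\) model with \(\lambda_{1}=\lambda_{2}=\lambda=1\), Exp(1) claim sizes, and n = 5 (the names are ours); the empirical quantiles estimate \(\operatorname{VaR}_{S_{n}}(\theta)\) and can be compared with the α = 0.5 column of Table 7.

```python
import numpy as np

def simulate_s1n_bpma(alpha1, lam1, lam, n, n_sims, rng=None):
    """Monte Carlo for S_{1n} under BPMA(1): N_{1i} = alpha1∘eps_{1,i-1} + eps_{1i},
    eps_{1i} ~ Poisson(lam1 + lam) marginally, claim sizes Exp(1)."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(n_sims)
    for k in range(n_sims):
        eps = rng.poisson(lam1 + lam, n + 1)           # eps_{10}, ..., eps_{1n}
        counts = rng.binomial(eps[:-1], alpha1) + eps[1:]
        out[k] = rng.exponential(1.0, counts.sum()).sum()
    return out

s = simulate_s1n_bpma(alpha1=0.5, lam1=1.0, lam=1.0, n=5, n_sims=100_000)
print(np.quantile(s, [0.90, 0.95]))   # compare with the alpha = 0.5 column of Table 7
```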

6 Conclusions and comments

In this paper, we propose a class of bidimensional discrete-time risk models whose bivariate claim counts obey the \(\operatorname{BPMA}(1)\) and \(\operatorname{BPAR}(1)\) processes. We derive their adjustment coefficient functions and the asymptotic distributions of the bivariate compound claim processes in finite time, we obtain upper bounds for three types of ruin probabilities by large deviations theory, and we present examples computing the three types of adjustment coefficients for the corresponding ruin probabilities as well as the marginal VaR values.

However, much further work can be done. The assumptions can be extended to more general ones in which the bivariate claim sizes are copula-distributed in the common shocks and the claim size distributions of the two businesses are heavy-tailed. Then the approximations to the three types of ruin probabilities would have to be discussed in another way.

Declarations

Acknowledgements

The author is grateful to the referees for their useful and constructive suggestions. This work is supported by National Natural Science Foundation of China (Nos. 11271155, 11371168, 11001105, 11071126, 11071269), Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110061110003), Scientific Research Fund of Jilin University (No. 201100011), and Jilin Province Natural Science Foundation (20130101066JC, 20130522102JH, 20101596).

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Institute of Mathematics, Jilin University, Qianjin Road 2699, Changchun, 130012, China
(2)
School of Mathematics, Jilin University, Qianjin Road 2699, Changchun, 130012, China

References

  1. Chan, W-S, Yang, H, Zhang, L: Some results on ruin probabilities in a two-dimensional risk model. Insur. Math. Econ. 32, 345-358 (2003)
  2. Yuen, KC, Guo, J, Wu, X: On the first time of ruin in the bivariate compound Poisson model. Insur. Math. Econ. 38, 298-308 (2006)
  3. Li, J, Liu, Z, Tang, Q: On the ruin probabilities of a bidimensional perturbed risk model. Insur. Math. Econ. 41, 185-195 (2007)
  4. Avram, F, Palmowski, Z, Pistorius, M: A two-dimensional ruin problem on the positive quadrant. Insur. Math. Econ. 42, 227-234 (2008)
  5. Badescu, A, Cheung, E, Rabehasaina, A: Two-dimensional risk model with proportional reinsurance. J. Appl. Probab. 48(3), 749-765 (2011)
  6. Cai, J, Li, J: Dependence properties and bounds for ruin probabilities in multivariate compound risk models. J. Multivar. Anal. 98, 757-773 (2007)
  7. Dang, L, Zhu, N, Zhang, H: Survival probability for a two-dimensional risk model. Insur. Math. Econ. 44, 491-496 (2009)
  8. Chen, Y, Yuen, KC, Ng, KW: Asymptotics for the ruin probabilities of a two-dimensional renewal risk model with heavy-tailed claims. Appl. Stoch. Models Bus. Ind. 27, 290-300 (2011)
  9. Al-Osh, MA, Alzaid, AA: First-order integer-valued autoregressive (\(\operatorname{INAR}(1)\)) process. J. Time Ser. Anal. 8, 261-275 (1987)
  10. Al-Osh, MA, Alzaid, AA: Integer-valued moving average (INMA) process. Stat. Pap. 29, 281-300 (1988)
  11. McKenzie, E: Autoregressive-moving average processes with negative binomial and geometric marginal distributions. Adv. Appl. Probab. 18, 679-705 (1986)
  12. McKenzie, E: Some ARMA models for dependent sequences of Poisson counts. Adv. Appl. Probab. 20(4), 822-835 (1988)
  13. Quoreshi, S: Bivariate time series modeling of financial count data. Commun. Stat., Theory Methods 35(7), 1343-1358 (2006)
  14. Pedeli, X, Karlis, D: A bivariate \(\operatorname{INAR}(1)\) process with application. Stat. Model. 11(4), 325-349 (2011)
  15. Cossétte, H, Marceau, E, Maume-Deschamps, V: Discrete-time risk models based on time series for count random variables. ASTIN Bull. 40, 123-150 (2009)
  16. Cossétte, H, Marceau, E, Toureille, F: Risk models based on time series for count random variables. Insur. Math. Econ. 48, 19-28 (2011)
  17. Marshall, A, Olkin, I: A family of bivariate distributions generated by the bivariate Bernoulli distribution. J. Am. Stat. Assoc. 80, 332-338 (1985)
  18. Nyrhinen, H: Rough descriptions of ruin for a general class of surplus processes. Adv. Appl. Probab. 30, 1008-1026 (1998)
  19. Müller, A, Pflug, G: Asymptotic ruin probabilities for risk processes with dependent increments. Insur. Math. Econ. 28, 381-392 (2001)
  20. Glynn, P, Whitt, W: Logarithm asymptotics for steady-state tail probabilities in a single-server queue. J. Appl. Probab. 31, 131-156 (1994)
