
The inverse moment for widely orthant dependent random variables

Abstract

In this paper, we investigate approximations of inverse moments for widely orthant dependent (WOD) random variables. Let \(\{Z_{n},n\geq1\}\) be a sequence of nonnegative WOD random variables, and let \(\{w_{ni},1\leq i\leq n,n\geq 1\}\) be a triangular array of nonnegative nonrandom weights. If the first moment is finite, then \(E(a+ \sum_{i=1}^{n}w_{ni}Z_{i})^{-\alpha}\sim (a+\sum_{i=1}^{n}w_{ni}EZ_{i})^{-\alpha}\) for all constants \(a>0\) and \(\alpha>0\). If the rth moment (\(r>2\)) is finite, then the convergence rate is \(\frac{E(a+\sum_{i=1}^{n}w_{ni}Z_{i})^{-\alpha}}{(a+\sum_{i=1}^{n}w_{ni}EZ_{i})^{-\alpha}}-1=O(\frac{1}{(a+\sum_{i=1}^{n}w_{ni}EZ_{i})^{1-2\beta/r}})\), where \(\beta\geq0\) and \(2\beta/r<1\). Finally, some simulations illustrate the results, which generalize several corresponding results in the literature.

Introduction

In this paper, we investigate approximations of the inverse moments of nonnegative and dependent random variables. Let \(\{Z_{n},n\geq1\}\) be a sequence of nonnegative random variables with finite second moments. Denote the normalized sums for \(\{Z_{n},n\geq1\}\) by \(X_{n}=\frac{1}{\sigma_{n}}\sum_{i=1}^{n}Z_{i}\), where \(\sigma_{n}^{2}=\sum_{i=1}^{n} \operatorname{Var} (Z_{i})\). Under some suitable conditions, the inverse moment can be approximated by the inverse of the moment. More precisely, we prove that

$$ E \biggl(\frac{1}{(a+ X_{n})^{\alpha}} \biggr)\sim \frac{1}{(a+EX_{n})^{\alpha}} $$
(1.1)

for all \(a>0\) and \(\alpha>0\), where \(c_{n}\sim d_{n}\) means that \(c_{n}/d_{n}\rightarrow1\) as \(n\rightarrow\infty\).

Generally, computing the exact αth inverse moment of \(a+X_{n}\) is more difficult than computing the αth inverse of the moment of \(a+X_{n}\). The αth inverse moment of \(a+X_{n}\) often plays an important role in many statistical applications, such as Stein estimation, life testing problems, evaluating risks of estimators, evaluating the power of test statistics, financial and insurance mathematics, change-point analysis, and so on. These applications usually require high accuracy. Several authors have studied approximations to inverse moments and their applications. For example, Chao and Strawderman [1] studied the inverse moments of binomial and Poisson random variables, Pittenger [2] obtained some sharp mean and variance bounds for Jensen-type inequalities, Cribari-Neto et al. [3] used an integral method to investigate the inverse moments of binomial random variables, Fujioka [4] investigated the inverse moments of chi-squared random variables, and Hsu [5] and Inclán and Tiao [6] studied change-point analysis, which requires computing inverse moments of gamma and chi-squared random variables.

Under an asymptotic normality condition, relation (1.1) was established by Garcia and Palacios [7]. Kaluszka and Okolewski [8] modified the assumptions of Garcia and Palacios [7] and obtained (1.1) for a nonnegative and independent sequence \(\{Z_{n},n\geq1\}\) satisfying the condition \(\sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{3}=o(\sigma_{n}^{3})\). Hu et al. [9] considered the weaker condition \(\sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{3}=o(\sigma_{n}^{2+\delta})\) for some \(\delta\in(0,1]\). On the one hand, Wu et al. [10] used a truncation method and exponential inequalities to study the inverse moment (1.1) when \(\{Z_{n},n\geq1\}\) satisfies the analogous Lindeberg condition \(\sigma_{n}^{-2}\sum_{i=1}^{n}E\{Z_{i}^{2}I(Z_{i}>\eta \sigma_{n})\}\rightarrow0\) as \(n\rightarrow\infty\) for some \(\eta>0\). Wang et al. [11] and Shen [12] extended the results of Wu et al. [10] to the dependent cases of NOD random variables and ρ-mixing random variables, respectively. On the other hand, Sung [13] applied a Rosenthal-type inequality to establish (1.1) under the assumption that the nonnegative sequence \(\{Z_{n},n\geq1\}\) satisfies the analogous Lindeberg condition. Xu and Chen [14] used a Rosenthal-type inequality to investigate the inverse moments of nonnegative NOD random variables. Hu et al. [15] established (1.1) under a first moment condition on \(\{Z_{n},n\geq1\}\), where \(X_{n}=\frac{1}{M_{n}}\sum_{i=1}^{n} Z_{i}\) and \(\{M_{n}\}\) is a sequence of positive real numbers. Shen [16] also obtained (1.1) for nonnegative random variables satisfying a Rosenthal-type inequality.

Moreover, Shi et al. [17] obtained the inverse moment (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\). Horng et al. [18] also considered the inverse moment (1.1) for \(X_{n}=\frac{1}{B_{n}}\sum_{i=1}^{n} Z_{i}\), where \(\{B_{n}\}\) is a sequence of nondecreasing positive real numbers. Yang et al. [19] established (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\) and obtained a convergence rate for it. Shi et al. [20] applied the Taylor series expansion to obtain a better convergence rate of (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\).

In this paper, we investigate the inverse moments (1.1) for the double-indexed weighted case, that is, for \(X_{n}=\sum_{i=1}^{n}w_{ni}Z_{i}\), where \(\{w_{ni},1\leq i\leq n,n\geq 1\}\) is a triangular array of nonnegative nonrandom weights, and \(\{Z_{n},n\geq1\}\) is a sequence of nonnegative and widely orthant dependent (WOD) random variables. Now, we recall the definition of WOD random variables.

Definition 1.1

Let \(\{Z_{n},n\geq1\}\) be a sequence of random variables. If there exists a finite sequence of real numbers \(\{g_{u}(n),n\geq1\}\) such that, for each \(n\geq1\) and for all \(z_{i}\in(-\infty, \infty)\), \(1\leq i\leq n\),

$$P \Biggl(\bigcap_{i=1}^{n}(Z_{i}>z_{i}) \Biggr)\leq g_{u}(n)\prod_{i=1}^{n}P(Z_{i}>z_{i}), $$

then we say that the random variables \(\{Z_{n},n\geq1\}\) are widely upper orthant dependent (WUOD). If there exists a finite sequence of real numbers \(\{g_{l}(n),n\geq1\}\) such that, for each \(n\geq1\) and for all \(z_{i}\in(-\infty, \infty)\), \(1\leq i\leq n\),

$$P \Biggl(\bigcap_{i=1}^{n}(Z_{i} \leq z_{i}) \Biggr)\leq g_{l}(n)\prod _{i=1}^{n}P(Z_{i}\leq z_{i}), $$

then we say that the random variables \(\{Z_{n},n\geq1\}\) are widely lower orthant dependent (WLOD). If the random variables \(\{Z_{n},n\geq1\}\) are both WUOD and WLOD, then we say that they are widely orthant dependent (WOD).

If \(P(Z_{i}>z_{i})=0\) for some \(1\leq i\leq n\), \(n\geq1\), then \(P(\bigcap_{i=1}^{n}(Z_{i}>z_{i}))=0\). Similarly, if \(P(Z_{i}\leq z_{i})=0\) for some \(1\leq i\leq n\), \(n\geq1\), then \(P(\bigcap_{i=1}^{n}(Z_{i}\leq z_{i}))=0\). In such cases, we adopt the convention \(\frac{0}{0}=1\). By Definition 1.1 we find that \(g_{u}(n)\geq 1\) and \(g_{l}(n)\geq1\) for all \(n\geq1\). Sometimes, we can take

$$g_{u}(n)=\sup_{z_{i}\in(-\infty,\infty),1\leq i\leq n}\frac{P (\bigcap_{i=1}^{n}(Z_{i}>z_{i}) )}{\prod_{i=1}^{n}P(Z_{i}>z_{i})},\quad n\geq 1, $$

and

$$g_{l}(n)=\sup_{z_{i}\in(-\infty,\infty),1\leq i\leq n}\frac{P (\bigcap_{i=1}^{n}(Z_{i}\leq z_{i}) )}{\prod_{i=1}^{n}P(Z_{i}\leq z_{i})},\quad n\geq1, $$

if \(g_{u}(n)<\infty\) and \(g_{l}(n)<\infty\) for all \(n\geq1\).
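As a toy illustration (not from the paper), the dominating coefficients can be computed exactly for a pair of dependent Bernoulli variables: for Bernoulli tails, only thresholds \(z\in[0,1)\) give a nontrivial ratio, so the suprema above collapse to single fractions. The joint pmf values below are assumed purely for illustration.

```python
# Exact dominating coefficients g_u(2), g_l(2) for a pair of Bernoulli
# variables with joint pmf P(Z1=i, Z2=j) = p_ij (toy example).
def wod_coeffs(p11, p10, p01, p00):
    p1_, p_1 = p11 + p10, p11 + p01      # marginals P(Z1=1), P(Z2=1)
    p0_, p_0 = 1.0 - p1_, 1.0 - p_1      # marginals P(Z1=0), P(Z2=0)
    # Thresholds outside [0, 1) make one side of the ratio equal to 1,
    # so the suprema reduce to the single nontrivial threshold region.
    g_u = max(1.0, p11 / (p1_ * p_1))    # sup of P(Z1>z1, Z2>z2) / product
    g_l = max(1.0, p00 / (p0_ * p_0))    # sup of P(Z1<=z1, Z2<=z2) / product
    return g_u, g_l

# Independent components: the joint pmf factorizes, so g_u = g_l = 1
# (the NOD/END case discussed below).
print(wod_coeffs(0.25, 0.25, 0.25, 0.25))   # -> (1.0, 1.0)
# Positively dependent components: both coefficients exceed 1.
print(wod_coeffs(0.40, 0.10, 0.10, 0.40))
```

In the second call, both coefficients equal \(0.4/0.25=1.6>1\), showing how positive dependence inflates \(g_{u}(n)\) and \(g_{l}(n)\).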

On the one hand, WOD random variables were introduced by Wang and Cheng [21] for risk models. Liu et al. [22], Wang et al. [23], He et al. [24], and Wang et al. [25] obtained more results on asymptotic properties of ruin probabilities in risk models with WOD random variables. On the other hand, for limit theories and applications of WOD sequences, we refer to Shen [26] for some exponent-type inequalities, Wang et al. [27] for complete convergence results and an application to nonparametric regression model estimation, Qiu and Chen [28] for complete moment convergence results, Yang et al. [29] for the Bahadur representation, Wang and Hu [30] for the consistency of the nearest neighbor estimator, etc.

If \(g_{u}(n)=g_{l}(n)\equiv1\), then WOD random variables reduce to negatively orthant dependent (NOD) random variables (see Lehmann [31]). Joag-Dev and Proschan [32] gave the notion of negatively associated (NA) random variables. Recall that a family \(X=\{X_{t},t\in T\}\) of real-valued random variables is said to be a normal (or Gaussian) system if all its finite-dimensional distributions are Gaussian. Let \(X=(X_{1},X_{2},\ldots,X_{n})\) be a normal random vector, \(n\geq2\). Joag-Dev and Proschan [32] proved that it is NA if and only if its components are nonpositively correlated. Joag-Dev and Proschan [32] also pointed out that NA random variables are NOD random variables, but the converse is not true. Moreover, if \(M\geq1\) and \(g_{u}(n) = g_{l}(n) = M\), then WOD random variables reduce to extended negatively dependent (END) random variables (see Liu [33, 34]). We also refer to Wang et al. [35, 36] and Hu et al. [37] for more information on END random variables.

Throughout the paper, we denote by \(C,C_{1},C_{2},\ldots\) positive constants independent of n and by \(C_{q},C_{1q},C_{2q},\ldots\) positive constants dependent only on q. Our results and simulations are presented in Section 2, and the proofs of the main results are presented in Section 3.

Main results and simulations

Let \(\{Z_{n},n\geq1\}\) be a sequence of nonnegative WOD random variables with the dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}\), and \(\{w_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of nonnegative and nonrandom weights. Denote \(X_{n}=\sum_{i=1}^{n}w_{ni}Z_{i}\), \(\mu_{n}=EX_{n} \), and \(\mu_{n,s}=\sum_{i=1}^{n} w_{ni}E[Z_{i}I(Z_{i}\leq\mu_{n}^{s})]\) for some \(0< s<1\). We list some assumptions.

Assumption 2.1

(A.1) \(g(n)=O(\mu_{n}^{\beta})\) for some \(\beta\geq0\);

(A.2) \(\max_{1\leq i\leq n}w_{ni}=O(1)\);

(A.3) \(\mu_{n}\rightarrow\infty\) as \(n\rightarrow\infty\);

(A.4) \(\mu_{n}\sim\mu_{n,s}\) as \(n\rightarrow\infty\).

Under a finite first moment condition, we obtain the following inverse moment approximation.

Theorem 2.1

Let \(EZ_{n}<\infty\) for all \(n\geq1\), and let assumptions (A.1)-(A.4) hold. Then (1.1) holds for all constants \(a>0\) and \(\alpha>0\).

In the case of the finite rth moment (\(r>2\)), we establish the following convergence rates of inverse moments.

Theorem 2.2

Suppose that, for some \(r>2\), \(EZ_{n}^{r}<\infty\) for all \(n\geq1\) and

$$ \sum_{i=1}^{n}E|Z_{i}-EZ_{i}|^{r}=O \bigl((\mu_{n})^{r/2} \bigr) \quad\textit{and}\quad \sum _{i=1}^{n}\operatorname{Var}(Z_{i})=O( \mu_{n}). $$
(2.1)

Let conditions (A.1)-(A.4) be fulfilled and \(2\beta/r<1\). Then, for all \(a>0\) and \(\alpha>0\),

$$ \frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}-1=O \biggl(\frac {1}{(a+EX_{n})^{1-2\beta/r}} \biggr), $$
(2.2)

and, for all \(a>0\) and \(\alpha>1\),

$$ E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr)\Big/\frac{EX_{n}}{(a+EX_{n})^{\alpha }}-1=O \biggl( \frac{1}{(a+EX_{n})^{1-2\beta/r}} \biggr). $$
(2.3)

Remark 2.1

If a in (1.1) is replaced by \(a_{n}>0\) satisfying \(a_{n}\rightarrow\infty\) and \(a_{n}=o(EX_{n})\), then Theorems 2.1 and 2.2 still hold. Since END random variables, NOD random variables, NA random variables, and independent random variables are all WOD random variables with dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}=O(1)\), condition (A.1) is fulfilled with \(\beta=0\). Therefore, we generalize the results of [10-20] for the single-indexed weighted or nonweighted case to the double-indexed weighted case under WOD random variables.

Simulation 2.1

We use the Monte Carlo method and MATLAB software to compute the approximations of inverse moments. For simplicity, let \(\{Z_{n},n\geq1\} \) be a sequence of i.i.d. \(\mathscr{P}(\lambda)\) distributed random variables (\(\lambda>0\)), \(w_{ni}=\frac{1}{\sigma_{n}}\), \(1\leq i\leq n\), and \(\sigma_{n}^{2}=\sum_{i=1}^{n} \operatorname{Var}(Z_{i})\). Then, we have \(X_{n}=\frac{1}{\sigma_{n}}\sum_{i=1}^{n}Z_{i}\), \(n\geq1\). For \(\lambda=1\), \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), we repeat the experiments 100,000 times and compute the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\); see Figure 1.
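The Poisson experiment can be reproduced with a short Monte Carlo sketch. The paper uses MATLAB; the Python/NumPy version below is an illustrative stand-in, with a fixed seed and the same parameter choices:

```python
import numpy as np

def poisson_ratio(n, a, alpha, lam=1.0, reps=100_000, seed=0):
    """Monte Carlo estimate of the 'ratio' E(a+X_n)^(-alpha) / (a+EX_n)^(-alpha),
    where X_n = (1/sigma_n) * sum_{i=1}^n Z_i and Z_i are i.i.d. Poisson(lam)."""
    rng = np.random.default_rng(seed)
    sigma_n = np.sqrt(n * lam)                           # Var(Z_i) = lam
    X = rng.poisson(lam, size=(reps, n)).sum(axis=1) / sigma_n
    EX = n * lam / sigma_n                               # exact EX_n = sqrt(n * lam)
    return ((a + X) ** -alpha).mean() * (a + EX) ** alpha

# As in Figure 1, the ratio stays >= 1 and decreases toward 1 as n grows.
for n in (10, 50, 100):
    print(n, poisson_ratio(n, a=1.0, alpha=1.0))
```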

Figure 1

Inverse moment for Poisson distribution.

Similarly, let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. \(\chi^{2}(1)\)-distributed random variables, and \(w_{ni}\equiv1\), \(1\leq i\leq n\). Then, we have \(X_{n}=\sum_{i=1}^{n}Z_{i}\), \(n\geq1\). For \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), the experiments are repeated 100,000 times, and the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\) is computed; see Figure 2.

Figure 2

Inverse moment for Chi-square distribution.

Likewise, let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. binomial random variables, and \(w_{ni}=\frac{i}{n}\), \(1\leq i\leq n\). Then, we have \(X_{n}=\sum_{i=1}^{n}\frac{i}{n}Z_{i}\), \(n\geq1\). For \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), the experiments are repeated 100,000 times, and the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\) is computed; see Figure 3 and Figure 4.
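The double-indexed weighted case can be sketched the same way. Since the text does not fix the binomial parameters, Bernoulli summands (Binomial(1, 0.5)) are assumed here purely for illustration:

```python
import numpy as np

def weighted_ratio(n, a, alpha, m=1, p=0.5, reps=100_000, seed=0):
    """Monte Carlo 'ratio' for the double-indexed weighted sum
    X_n = sum_{i=1}^n (i/n) * Z_i, with Z_i i.i.d. Binomial(m, p).
    (The parameters m and p are illustrative; the text does not fix them.)"""
    rng = np.random.default_rng(seed)
    w = np.arange(1, n + 1) / n                  # weights w_ni = i/n
    X = rng.binomial(m, p, size=(reps, n)) @ w   # one weighted sum per replication
    EX = m * p * w.sum()                         # exact EX_n = m*p * sum_i (i/n)
    return ((a + X) ** -alpha).mean() * (a + EX) ** alpha

for n in (10, 50, 100):
    print(n, weighted_ratio(n, a=1.0, alpha=1.0))
```

As in Figures 3 and 4, the computed ratio exceeds 1 and shrinks toward 1 as n grows.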

Figure 3

Inverse moment for Binomial distribution.

Figure 4

Inverse moment for Binomial distribution.

In Figures 1-4, the y-axis label ‘ratio’ is defined as \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\), and the x-axis label ‘sample sizes’ is the sample size n. Figures 1-4 show that the ‘ratio’ is always ≥1. In fact, by Jensen’s inequality, \(E(a+X_{n})^{-\alpha}\geq (a+EX_{n})^{-\alpha}\) for all \(a>0\) and \(\alpha>0\). Meanwhile, for the different values of a and α, the ‘ratio’ decreases to 1 as the sample size n increases. So the results of Figures 1-4 agree with Theorems 2.1 and 2.2.
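The lower bound ‘ratio’ ≥ 1 is a pointwise consequence of Jensen’s inequality and holds exactly for the empirical distribution of any nonnegative sample; the exponential draws below are only an illustration:

```python
import numpy as np

# f(x) = (a + x)^(-alpha) is convex on [0, inf), so for the empirical
# distribution of any nonnegative sample, mean(f(x)) >= f(mean(x)).
rng = np.random.default_rng(1)
x = rng.exponential(size=10_000)          # any nonnegative sample will do
for a in (0.1, 1.0):
    for alpha in (1.0, 2.0):
        lhs = ((a + x) ** -alpha).mean()
        rhs = (a + x.mean()) ** -alpha
        assert lhs >= rhs, "Jensen's inequality violated"
print("empirical Jensen check passed")
```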

Proofs of main results

Lemma 3.1

(Wang et al. [23], Proposition 1.1)

Let \(\{Z_{n},n\geq1\}\) be WUOD (WLOD) with dominating coefficients \(g_{u}(n)\), \(n\geq1 \) (\(g_{l}(n)\), \(n\geq1\)). If \(\{f_{n}(\cdot),n\geq1\}\) are nondecreasing, then \(\{f_{n}(Z_{n}),n\geq1\}\) are still WUOD (WLOD) with dominating coefficients \(g_{u}(n)\), \(n\geq1\) (\(g_{l}(n)\), \(n\geq1\)); if \(\{f_{n}(\cdot),n\geq1\}\) are nonincreasing, then \(\{f_{n}(Z_{n}),n\geq1\}\) are WLOD (WUOD) with dominating coefficients \(g_{l}(n)\), \(n\geq1\) (\(g_{u}(n)\), \(n\geq1\)).

Lemma 3.2

(Wang et al. [27], Corollary 2.3)

Let \(q\geq2\), and let \(\{Z_{n},n\geq1\}\) be a mean-zero sequence of WOD random variables with dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}\) and \(E|Z_{n}|^{q} <\infty\) for all \(n\geq1\). Then, for all \(n\geq1\), there exist positive constants \(C_{1}(q)\) and \(C_{2}(q)\) depending only on q such that

$$ E \Biggl|\sum_{i=1}^{n}Z_{i} \Biggr|^{q}\leq C_{1}(q)\sum_{i=1}^{n}E|Z_{i}|^{q}+C_{2}(q)g(n) \Biggl(\sum_{i=1}^{n}EZ_{i}^{2} \Biggr)^{q/2}. $$

Proof of Theorem 2.1

Let \(a>0\) and \(\alpha>0\). Since \(f(x)=(a+x)^{-\alpha}\) is a convex function for \(x\geq0\), applying Jensen’s inequality, we obtain

$$E(a+X_{n})^{-\alpha}\geq (a+EX_{n})^{-\alpha}. $$

Thus,

$$ \liminf_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \geq 1. $$
(3.1)

It suffices to show that

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \leq 1. $$
(3.2)

So it is enough to show that, for all \(\delta\in(0,1)\),

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \leq (1-\delta)^{-\alpha}. $$
(3.3)

Obviously, it follows from (A.4) that

$$\lim_{n\rightarrow\infty} \Biggl\{ \sum_{i=1}^{n}w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\bigg/\sum_{i=1}^{n} w_{ni}EZ_{i} \Biggr\} =0, $$

which yields that there exists \(n(\delta)>0\) such that

$$ \sum_{i=1}^{n} w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\leq \frac{\delta}{4}\sum_{i=1}^{n}w_{ni}EZ_{i},\quad n\geq n(\delta). $$
(3.4)

Decompose \(E(a+X_{n})^{-\alpha}\) as

$$ E(a+X_{n})^{-\alpha}:=Q_{1}+Q_{2}, $$
(3.5)

where

$$\begin{aligned}& Q_{1}=E \bigl[(a+X_{n})^{-\alpha}I(U_{n}\leq \mu_{n}-\delta\mu_{n}) \bigr],\qquad Q_{2}=E \bigl[(a+X_{n})^{-\alpha}I(U_{n}> \mu_{n}-\delta \mu_{n}) \bigr], \\& U_{n}=\sum_{i=1}^{n}w_{ni} \bigl[Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr)+\mu_{n}^{s}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]. \end{aligned}$$

Since \(X_{n}\geq U_{n}\), we have

$$Q_{2}\leq E \bigl[(a+X_{n})^{-\alpha}I(X_{n}> \mu_{n}-\delta\mu_{n}) \bigr]\leq (a+\mu_{n}-\delta \mu_{n})^{-\alpha}, $$

which by condition (A.3) implies that

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}Q_{2} \bigr\} \leq\limsup_{n\rightarrow\infty} \bigl\{ (a+\mu_{n})^{\alpha}(a+ \mu_{n}-\delta\mu_{n})^{-\alpha} \bigr\} =(1- \delta)^{-\alpha }. $$
(3.6)

We get by (3.4) that, for all \(n\geq n(\delta)\),

$$|\mu_{n}-EU_{n}| \leq2\sum_{i=1}^{n} w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\leq\delta\mu_{n}/2. $$

Denote \(Z_{ni}=w_{ni}[Z_{i} I(Z_{i}\leq\mu_{n}^{s})+ \mu_{n}^{s}I(Z_{i}>\mu_{n}^{s})]\), \(1\leq i\leq n\). By Lemma 3.1, \(\{Z_{ni}-EZ_{ni},1\leq i\leq n\}\) are also mean-zero WOD random variables with dominating coefficients \(g(n)\). Thus, by the Markov inequality, Lemma 3.2, and the \(C_{r}\) inequality, we obtain that, for all \(q>2\) and \(n\geq n(\delta)\),

$$\begin{aligned} Q_{1} =&E \bigl[(a+X_{n})^{-\alpha}I(U_{n} \leq \mu_{n}-\delta\mu_{n}) \bigr] \\ \leq& a^{-\alpha}P(U_{n}\leq\mu_{n}-\delta \mu_{n}) \\ \leq& a^{-\alpha}P\bigl(|EU_{n}-U_{n}|\geq\delta \mu_{n}/2\bigr) \\ \leq&\frac{C_{1q}2^{q}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \sum _{i=1}^{n}E|Z_{ni}|^{q}+g(n) \Biggl(\sum_{i=1}^{n} \operatorname{Var}(Z_{ni}) \Biggr)^{q/2} \Biggr\} \\ \leq&\frac{C_{2q}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \sum _{i=1}^{n}w_{ni}^{q} \bigl[E \bigl(Z_{i}^{q}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+\mu_{n}^{sq}EI \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr] \Biggr\} \\ &{}+\frac{C_{3q}}{\delta^{q}}\mu_{n}^{-q}g(n) \Biggl\{ \sum _{i=1}^{n}w_{ni}^{2} \bigl[E \bigl(Z_{i}^{2}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+\mu_{n}^{2s}EI \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr] \Biggr\} ^{q/2} \\ \leq&\frac{C_{2q}(\max_{1\leq i\leq n}w_{ni})^{q-1}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \mu_{n}^{s(q-1)}\sum_{i=1}^{n}w_{ni} \bigl[E \bigl(Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+E \bigl(Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr) \bigr] \Biggr\} \\ &{}+\frac{C_{3q}(\max_{1\leq i\leq n}w_{ni})^{q/2}}{\delta^{q}}\mu_{n}^{-q}g(n) \\ &{}\times \Biggl\{ \mu_{n}^{s}\sum_{i=1}^{n}w_{ni} \bigl[E \bigl(Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+E \bigl(Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr) \bigr] \Biggr\} ^{q/2} \\ :=&I_{n1}+I_{n2}. \end{aligned}$$
(3.7)

Combining conditions (A.1)-(A.4) with (3.7), we establish

$$ I_{n1}+I_{n2}\leq\frac{C_{4q}}{\delta^{q}}\mu_{n}^{-q} \bigl[\mu_{n}^{s(q-1)}\mu _{n}+\mu_{n}^{\beta} \bigl(\mu_{n}^{s}\mu_{n} \bigr)^{q/2} \bigr] =\frac{C_{4q}}{\delta^{q}} \bigl[\mu_{n}^{-(q-1)(1-s)}+ \mu_{n}^{\beta-\frac {q}{2}(1-s)} \bigr]. $$
(3.8)

Since \(q>2\), we have \(q-1>\frac{q}{2}\). Therefore, we take \(q>\max\{2,2(\alpha+\beta)/(1-s)\}\) in (3.8) and obtain

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}Q_{1} \bigr\} \leq \limsup_{n\rightarrow\infty} \biggl\{ (a+\mu_{n})^{\alpha} \frac{C_{5q}}{\delta ^{q}} \bigl[\mu_{n}^{-(q-1)(1-s)} +\mu_{n}^{\beta-\frac{q}{2}(1-s)} \bigr] \biggr\} =0. $$
(3.9)

Thus, by (3.1)-(3.3), (3.5), (3.6), and (3.9) the proof of (1.1) is completed. □

Proof of Theorem 2.2

By the Taylor series expansion at \(EX_{n}\), we have that

$$ \frac{1}{(a+X_{n})^{\alpha}}=\frac{1}{(a+EX_{n})^{\alpha}}-\frac{\alpha (X_{n}-EX_{n})}{(a+EX_{n})^{\alpha+1}} +\frac{\alpha(\alpha+1)}{2} \frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha +2}}, $$

where \(\xi_{n}\) lies between \(X_{n}\) and \(\mu_{n}\). Taking the expectation, we obtain

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr) =\frac{1}{(a+EX_{n})^{\alpha}} +\frac{\alpha(\alpha+1)}{2}E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha +2}} \biggr). $$
(3.10)
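Replacing \(\xi_{n}\) by \(EX_{n}\) in the remainder gives the familiar second-order approximation \(E(a+X_{n})^{-\alpha}\approx(a+EX_{n})^{-\alpha}+\frac{\alpha(\alpha+1)}{2}\operatorname{Var}(X_{n})(a+EX_{n})^{-\alpha-2}\). A quick numeric sanity check, illustrative only, with Poisson(1) summands and \(w_{ni}\equiv1\):

```python
import numpy as np

rng = np.random.default_rng(2)
a, alpha, n = 1.0, 2.0, 200
# X_n = sum of n i.i.d. Poisson(1) variables, so EX_n = n and Var(X_n) = n.
X = rng.poisson(1.0, size=(100_000, n)).sum(axis=1)
lhs = ((a + X) ** -alpha).mean()                 # Monte Carlo E(a+X_n)^(-alpha)
approx = (a + n) ** -alpha \
    + alpha * (alpha + 1) / 2 * n / (a + n) ** (alpha + 2)
print(lhs, approx)   # the two agree to a relative error well below 1%
```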

We need to show that

$$ E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}} \biggr)=O \biggl(\frac {1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), $$
(3.11)

where \(\beta\geq0\), \(2\beta/r<1\), and \(r>2\). Obviously, we can decompose it so that

$$ E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}} \biggr) =E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}> \mu_{n}) \biggr) +E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}\leq \mu_{n}) \biggr). $$
(3.12)

For some \(r>2\), we can argue by Lemma 3.2 and conditions (2.1) and (A.2) that

$$\begin{aligned} &E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}>\mu_{n}) \biggr) \\ &\quad\leq \frac{1}{(a+\mu_{n})^{\alpha+2}}E(X_{n}-EX_{n})^{2} \\ &\quad\leq\frac{1}{(a+\mu_{n})^{\alpha+2}} \bigl(E|X_{n}-EX_{n}|^{r} \bigr)^{2/r} = \frac{1}{(a+\mu_{n})^{\alpha+2}} \Biggl(E \Biggl|\sum _{i=1}^{n} w_{ni}(Z_{i}-EZ_{i}) \Biggr|^{r} \Biggr)^{2/r} \\ &\quad\leq\frac{C_{1}}{(a+\mu_{n})^{\alpha+2}} \Biggl\{ \sum_{i=1}^{n} w_{ni}^{r}E|Z_{i}-EZ_{i}|^{r}+g(n) \Biggl(\sum_{i=1}^{n}w_{ni}^{2} \operatorname{Var}(Z_{i}) \Biggr)^{r/2} \Biggr\} ^{2/r} \\ &\quad\leq\frac{C_{1}(\max_{1\leq i\leq n}w_{ni})^{2}}{(a+\mu_{n})^{\alpha+2}} \Biggl\{ \sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{r}+g(n) \Biggl(\sum _{i=1}^{n}\operatorname{Var}(Z_{i}) \Biggr)^{r/2} \Biggr\} ^{2/r} \\ &\quad\leq C_{2} \biggl(\frac{(EX_{n})^{1+2\beta/r}}{(a+EX_{n})^{\alpha+2}} \biggr)=O \biggl( \frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), \end{aligned}$$
(3.13)

where \(2\beta/r<1\).

Meanwhile, for some \(r>2\), applying the Hölder inequality and Theorem 2.1, we have that

$$\begin{aligned} &E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}\leq \mu_{n}) \biggr) \\ &\quad\leq E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+X_{n})^{\alpha+2}} \biggr) \\ &\quad\leq \bigl[E|X_{n}-EX_{n}|^{r} \bigr]^{2/r} \bigl[E(a+X_{n})^{\frac{(-\alpha -2)r}{r-2}} \bigr]^{\frac{r-2}{r}} \\ &\quad\leq C_{1} \Biggl(E \Biggl|\sum_{i=1}^{n}w_{ni} (Z_{i}-EZ_{i}) \Biggr|^{r} \Biggr)^{2/r} \bigl[(a+EX_{n})^{\frac{(-\alpha-2)r}{r-2}} \bigr]^{\frac {r-2}{r}} \\ &\quad=O \biggl(\frac{(EX_{n})^{1+2\beta/r}}{(a+EX_{n})^{\alpha+2}} \biggr) =O \biggl(\frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), \end{aligned}$$
(3.14)

where \(2\beta/r<1\).

Consequently, (3.11) follows from (3.12)-(3.14). Combining (3.10) with (3.11), we establish the result of (2.2) with \(2\beta/r<1\).

It remains to prove (2.3). We decompose

$$ E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr)=E \biggl(\frac{1}{(a+X_{n})^{\alpha -1}} \biggr)-aE \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr). $$
(3.15)

For \(\alpha>1\), by (3.10) and (3.11) we have

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha-1}} \biggr)=\frac{1}{(a+EX_{n})^{\alpha -1}}+O \biggl( \frac{1}{(a+EX_{n})^{\alpha-2\beta/r}} \biggr). $$
(3.16)

Similarly, it follows from (3.10) and (3.11) that

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr)=\frac{1}{(a+EX_{n})^{\alpha}} +O \biggl( \frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr). $$
(3.17)

Consequently, by (3.15)-(3.17) we have

$$\begin{aligned} E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr) =&\frac{1}{(a+EX_{n})^{\alpha -1}}+O \biggl( \frac{1}{(a+EX_{n})^{\alpha-2\beta/r}} \biggr) \\ &{} - \biggl\{ \frac{a}{(a+EX_{n})^{\alpha}} +O \biggl(\frac{a}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr) \biggr\} \\ =&\frac{EX_{n}}{(a+EX_{n})^{\alpha}}+O \biggl(\frac{1}{(a+EX_{n})^{\alpha-2\beta /r}} \biggr). \end{aligned}$$
(3.18)

Therefore, (2.3) immediately follows from (3.18). □

References

  1. Chao, MT, Strawderman, WE: Negative moments of positive random variables. J. Am. Stat. Assoc. 67(338), 429-431 (1972)

  2. Pittenger, AO: Sharp mean-variance bounds for Jensen-type inequalities. Stat. Probab. Lett. 10(2), 91-94 (1990)

  3. Cribari-Neto, F, Garcia, NL, Vasconcellos, KLP: A note on inverse moments of binomial variates. Rev. Econom. 20(2), 269-277 (2000)

  4. Fujioka, T: Asymptotic approximations of the inverse moment of the noncentral chi-squared variable. J. Japan Statist. Soc. 31(1), 99-109 (2001)

  5. Hsu, DA: Detecting shifts of parameter in gamma sequences with applications to stock price and air traffic flow analysis. J. Am. Stat. Assoc. 74(365), 31-40 (1979)

  6. Inclán, C, Tiao, GC: Use of cumulative sums of squares for retrospective detection of changes of variance. J. Am. Stat. Assoc. 89(427), 913-923 (1994)

  7. Garcia, NL, Palacios, JL: On inverse moments of nonnegative random variables. Stat. Probab. Lett. 53(3), 235-239 (2001)

  8. Kaluszka, M, Okolewski, A: On Fatou-type lemma for monotone moments of weakly convergent random variables. Stat. Probab. Lett. 66(1), 45-50 (2004)

  9. Hu, SH, Chen, GJ, Wang, XJ, Chen, EB: On inverse moments of nonnegative weakly convergent random variables. Acta Math. Appl. Sin. 30(2), 361-367 (2007)

  10. Wu, TJ, Shi, XP, Miao, BQ: Asymptotic approximation of inverse moments of nonnegative random variables. Stat. Probab. Lett. 79(11), 1366-1371 (2009)

  11. Wang, XJ, Hu, SH, Yang, WZ, Ling, NX: Exponential inequalities and inverse moment for NOD sequence. Stat. Probab. Lett. 80(5-6), 452-461 (2010)

  12. Shen, AT: A note on the inverse moments for nonnegative ρ-mixing random variables. Discrete Dyn. Nat. Soc. 2011, Article ID 185160 (2011)

  13. Sung, SH: On inverse moments for a class of nonnegative random variables. J. Inequal. Appl. 2010, Article ID 823767 (2010)

  14. Xu, M, Chen, PY: On inverse moments for nonnegative NOD sequence. Acta Math. Sin. 55(2), 201-206 (2013)

  15. Hu, SH, Wang, XH, Yang, WZ, Wang, XJ: A note on the inverse moment for the nonnegative random variables. Commun. Stat., Theory Methods 43(8), 1750-1757 (2014)

  16. Shen, AT: On asymptotic approximation of inverse moments for a class of nonnegative random variables. Statistics 48(6), 1371-1379 (2014)

  17. Shi, XP, Wu, YH, Liu, Y: A note on asymptotic approximations of inverse moments of nonnegative random variables. Stat. Probab. Lett. 80(15-16), 1260-1264 (2010)

  18. Horng, WJ, Chen, PY, Hu, T-C: On approximation for inverse moments of nonnegative random variables. J. Math. Stat. Oper. Res. (JMSOR) 1(1), 38-42 (2012)

  19. Yang, WZ, Hu, SH, Wang, XJ: On the asymptotic approximation of inverse moment for nonnegative random variables. Commun. Stat., Theory Methods (2014). Accepted author version posted online: 08 Oct 2014. http://www.tandfonline.com/doi/abs/10.1080/03610926.2013.781648

  20. Shi, XP, Reid, N, Wu, YH: Approximation to the moments of ratios of cumulative sums. Can. J. Stat. 42(2), 325-336 (2014)

  21. Wang, YB, Cheng, DY: Basic renewal theorems for random walks with widely dependent increments. J. Math. Anal. Appl. 384(2), 597-606 (2011)

  22. Liu, XJ, Gao, QW, Wang, YB: A note on a dependent risk model with constant interest rate. Stat. Probab. Lett. 82(4), 707-712 (2012)

  23. Wang, YB, Cui, ZL, Wang, KY, Ma, XL: Uniform asymptotics of the finite-time ruin probability for all times. J. Math. Anal. Appl. 390(1), 208-223 (2012)

  24. He, W, Cheng, DY, Wang, YB: Asymptotic lower bounds of precise large deviations with nonnegative and dependent random variables. Stat. Probab. Lett. 83(1), 331-338 (2013)

  25. Wang, KY, Wang, YB, Gao, QW: Uniform asymptotics of the finite-time ruin probability of dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15(1), 109-124 (2013)

  26. Shen, AT: Bernstein-type inequality for widely dependent sequence and its application to nonparametric regression models. Abstr. Appl. Anal. 2013, Article ID 862602 (2013)

  27. Wang, XJ, Xu, C, Hu, T-C, Volodin, AI, Hu, SH: On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. Test 23(3), 607-629 (2014)

  28. Qiu, DH, Chen, PY: Complete and complete moment convergence for weighted sums of widely orthant dependent random variables. Acta Math. Sin. Engl. Ser. 30(9), 1539-1548 (2014)

  29. Yang, WZ, Liu, TT, Wang, XJ, Hu, SH: On the Bahadur representation of sample quantiles for widely orthant dependent sequences. Filomat 28(7), 1333-1343 (2014)

  30. Wang, XJ, Hu, SH: The consistency of the nearest neighbor estimator of the density function based on WOD samples. J. Math. Anal. Appl. 429(1), 497-512 (2015)

  31. Lehmann, EL: Some concepts of dependence. Ann. Math. Stat. 37(5), 1137-1153 (1966)

  32. Joag-Dev, K, Proschan, F: Negative association of random variables with applications. Ann. Stat. 11(1), 286-295 (1983)

  33. Liu, L: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79(9), 1290-1298 (2009)

  34. Liu, L: Necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails. Sci. China Math. 53(6), 1421-1434 (2010)

  35. Wang, XJ, Hu, T-C, Volodin, AI, Hu, SH: Complete convergence for weighted sums and arrays of rowwise extended negatively dependent random variables. Commun. Stat., Theory Methods 42(13), 2391-2401 (2013)

  36. Wang, XJ, Wang, SJ, Hu, SH, Ling, JM, Wei, YF: On complete convergence for weighted sums of rowwise extended negatively dependent random variables. Stoch. Int. J. Probab. Stoch. Process. 85(6), 1060-1072 (2013)

  37. Hu, T-C, Rosalsky, A, Wang, KL: Complete convergence theorems for extended negatively dependent random variables. Sankhya, Ser. A 77, 1-29 (2015)


Acknowledgements

The authors are deeply grateful to editors and the anonymous referees for their careful reading and insightful comments, which helped in improving the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (11171001, 11426032, 11501005), Natural Science Foundation of Anhui Province (1408085QA02, 1508085QA01, 1508085J06, 1608085QA02), Provincial Natural Science Research Project of Anhui Colleges (KJ2014A010, KJ2014A020, KJ2015A065), Higher Education Talent Revitalization Project of Anhui Province (2013SQRL005ZD), Quality Engineering Project of Anhui Province (2015jyxm054), Applied Teaching Model Curriculum of Anhui University (XJYYKC1401,ZLTS2015053), and Doctoral Research Start-up Funds Projects of Anhui University.

Author information

Correspondence to Wenzhi Yang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Li, X., Liu, X., Yang, W. et al. The inverse moment for widely orthant dependent random variables. J Inequal Appl 2016, 161 (2016). https://doi.org/10.1186/s13660-016-1099-8


MSC

  • 60E15
  • 62E20

Keywords

  • inverse moment model
  • WOD random variables
  • convergence rate
  • triangular array