# On the exponential inequality for acceptable random variables

## Abstract

In this paper, we obtain some new exponential inequalities for partial sums and their finite maxima of acceptable random variables, building on the results of Sung et al. (J. Korean Stat. Soc., 40, 109-114, 2011) but by different methods. The inequalities we obtain improve the existing corresponding results and, in some sense, are optimal. In addition, we introduce the concept of widely acceptable random variables, together with some examples, in order to extend the results mentioned above.

MSC: 60F15, 62G20

## 1 Introduction

It is well known that exponential inequalities for random variables are very useful in several probabilistic derivations. Recently, Sung et al. [1] obtained an exponential inequality for identically distributed acceptable random variables; their result improved the corresponding ones of Kim and Kim [2], Nooghabi and Azarnoosh [3], Sung [4], Xing [5], Xing et al. [6], and Xing and Yang [7].

Let {X i : i ≥ 1} be a sequence of random variables defined on a fixed probability space (Ω, F, P). We say that {X i : i ≥ 1} are acceptable if there exists δ > 0 such that, for any real λ satisfying | λ | ≤ δ,

$E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda \sum _{i=1}^{n}{X}_{i}\right\}\le \prod _{i=1}^{n}E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda {X}_{i}\right\},\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}n\ge 1.$
(1.1)

The concept of acceptable random variables was first proposed by Giuliano Antonini et al. [8], where inequality (1.1) was required to hold for all λ ∈ (-∞, ∞). Sung et al. [1] then introduced the weaker definition above. The acceptable structure covers not only some common negative dependence structures (see [9, 10], and so on) but also some other dependence structures. We will extend this concept in Section 4.
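For orientation, independence is the simplest instance of acceptability: the moment generating function of a sum of independent variables factorizes, so (1.1) holds with equality. A minimal numerical sketch, assuming centered normal variables (whose moment generating function is exp{λ²σ²/2}; the choice of normals is illustrative only):

```python
import math

# For independent N(0, sigma^2) variables, the moment generating function of the
# sum factorizes exactly, so inequality (1.1) holds with equality; acceptability
# only requires "<=", so independence is a special case.
def normal_mgf(lam, sigma):
    # E exp{lam * X} for X ~ N(0, sigma^2)
    return math.exp(0.5 * lam**2 * sigma**2)

lam = 0.7
sigmas = [1.0, 2.0, 0.5]

# MGF of the sum of independent centered normals: the variances add.
mgf_sum = normal_mgf(lam, math.sqrt(sum(s**2 for s in sigmas)))
product_of_mgfs = math.prod(normal_mgf(lam, s) for s in sigmas)

assert abs(mgf_sum - product_of_mgfs) < 1e-12
```

Negatively dependent families satisfy (1.1) with a genuine inequality; the equality case above is only the boundary instance.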

The main results of Sung et al. [1] are the following.

Theorem 1.A Let {X i : i ≥ 1} be a sequence of identically distributed and acceptable random variables with E exp{δ | X1|} < ∞ for some δ > 0. Then for any 0 < ε ≤ Kδ,

$P\left(\left|\sum _{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)\right|>n\epsilon \right)\phantom{\rule{0.3em}{0ex}}\le \phantom{\rule{0.3em}{0ex}}2\phantom{\rule{0.3em}{0ex}}exp\left\{-\frac{n{\epsilon }^{2}}{4K}\right\},$
(1.2)

where $K=2{\left(E|{X}_{1}|\right)}^{\frac{1}{2}}E\,\mathrm{exp}\left\{\delta |{X}_{1}|\right\}$.

Inspired by the above theorem, we present the following three problems.

Problem 1.1 Sung et al. [1] show that the upper bound in Theorem 1.A is smaller than those of Kim and Kim [2], Nooghabi and Azarnoosh [3], Sung [4], Xing [5], Xing et al. [6], and Xing and Yang [7], but they did not show that their upper bound is optimal in any sense. Hence, we wonder whether there exists an upper bound that is optimal in some sense.

Problem 1.2 It is well known that an exponential inequality for the finite maximum of partial sums ${max}_{1\le k\le n}{\sum }_{i=1}^{k}\left({X}_{i}-E{X}_{i}\right)$ is more valuable than one for the partial sum ${\sum }_{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)$ in many fields. Thus, we wonder whether there is an exponential inequality for ${max}_{1\le k\le n}{\sum }_{i=1}^{k}\left({X}_{i}-E{X}_{i}\right)$ that is optimal in some sense.

Problem 1.3 For random variables satisfying a much weaker condition than acceptability, we wonder whether there are results similar to those for acceptable random variables.

This paper is organized as follows: in Section 2, we will state our main results, which answer Problems 1.1 and 1.2 above positively; in Section 3, we will prove our results; and in Section 4, we will discuss Problem 1.3.

## 2 Main results

For the sake of simplicity, we only prove the one-sided inequalities; the corresponding two-sided results follow by the standard method, so we do not go into details. Firstly, we introduce some notions, notation, and preparatory results. It can be seen from what follows that our methods differ from those of the references mentioned above.

For a random variable X, we write δ0 = sup {λ ≥ 0: E exp {λ (X - EX)} < ∞}. Obviously, 0 ≤ δ0 ≤ ∞. Let {a i : i ≥ 1} be a sequence of positive numbers such that a n ↑ ∞ as n → ∞. If δ0 > 0, then for any fixed n ≥ 2, 1 ≤ k ≤ n, and 0 < λ < δ0, write

${f}_{k}\left(\lambda \right)={f}_{k}\left(\lambda ,n\right)\equiv \lambda -\frac{k}{{a}_{n}}log\phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda \left(X-EX\right)\right\}.$
(2.1)

We now propose a proposition that plays a key role for the main results of this paper.

Proposition 2.1. Let X be a non-degenerate random variable with δ0 > 0. Then, for any fixed n ≥ 2 and 1 ≤ k ≤ n, there exists a unique finite constant 0 < λ k0 = λ k0 (n) ≤ δ0 such that

${f}_{k}\left({\lambda }_{k0}\right)=\underset{\lambda \in \left[0,{\delta }_{0}\right]}{max}{f}_{k}\left(\lambda \right)>0.$
(2.2)

Furthermore, we have

${\lambda }_{k0}=min\left\{{\delta }_{0},{\lambda }_{k1}\right\},$
(2.3)

where λ k1 is the solution of the equation

$E\left(X-EX-\frac{{a}_{n}}{k}\right)exp\left\{\lambda \left(X-EX\right)\right\}=0;$

if λ k1 does not exist, define λ k1 = ∞; then δ0 < ∞ and E exp {δ0 X} < ∞. Finally, we have 0 < λ k0 ≤ λ k-1,0 ≤ δ0 for all 2 ≤ k ≤ n.

Remark 2.1. λ k1 is also the solution of the Petrov equation

$\begin{array}{c}{h}_{k}\left(\lambda \right)={h}_{k}\left(\lambda ,n\right)\equiv \frac{\mathsf{\text{d}}}{\mathsf{\text{d}}\lambda }Eexp\left\{\lambda \left(X-EX-\frac{{a}_{n}}{k}\right)\right\}\\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}=E\left(X-EX-\frac{{a}_{n}}{k}\right)exp\left\{\lambda \left(X-EX-\frac{{a}_{n}}{k}\right)\right\}=0,\end{array}$

so we call λ k1 the Petrov exponent of $X-EX-\frac{{a}_{n}}{k}$ for 1 ≤ k ≤ n and n ≥ 2.
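To make Proposition 2.1 and Remark 2.1 concrete, here is a small numerical sketch under an illustrative assumption not taken from the paper: X uniform on {-1, +1}, so Y = X - EX = X and E exp{λY} = cosh λ. The Petrov equation then reads tanh λ = a_n/k, and a grid search for the maximizer of f_k recovers λ_{k1} = arctanh(a_n/k) whenever a_n/k < 1:

```python
import math

# Illustrative sketch (assumption: X uniform on {-1, +1}).  With a_n = 2 and
# k = 3, the Petrov equation  E Y e^{lam Y} / E e^{lam Y} = a_n / k  reads
# tanh(lam) = 2/3, so lam_{k1} = atanh(2/3); a grid search over f_k finds it.
a_n, k = 2.0, 3

def f_k(lam):
    # f_k(lam) = lam - (k / a_n) * log E exp{lam Y}, cf. (2.1)
    return lam - (k / a_n) * math.log(math.cosh(lam))

grid = [i * 1e-4 for i in range(1, 50000)]   # lam in (0, 5)
lam_star = max(grid, key=f_k)                # grid maximizer of f_k

assert abs(lam_star - math.atanh(2.0 / 3.0)) < 1e-3
assert f_k(lam_star) > 0.0                   # cf. (2.2)
```

Here f_k is strictly concave with f_k(0) = 0 and f_k'(0) = 1, exactly as in the proof of Proposition 2.1, so the maximizer is interior and unique.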

According to the above proposition, we obtain our first result, for the partial sums ${\sum }_{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)$ for each fixed n ≥ 2, analogous to Theorem 1.A.

Theorem 2.1. Let {X, X i : i ≥ 1} be a sequence of identically distributed, non-degenerate, and acceptable random variables for δ0 > 0, that is, (1.1) holds for any 0 ≤ λ ≤ δ0. Assume that {a i : i ≥ 1} is a sequence of positive real numbers such that a n ↑ ∞ as n → ∞. Then there exists a unique finite positive constant λ k0 satisfying (2.2) and (2.3), such that for each fixed n ≥ 2 and 1 ≤ k ≤ n,

$P\left(\sum _{i=1}^{k}\left({X}_{i}-E{X}_{i}\right)>{a}_{n}\right)\le exp\left\{-{a}_{n}{f}_{k}\left({\lambda }_{k0}\right)\right\}$
(2.4)

and

$\begin{array}{c}\mathrm{exp}\left\{-{a}_{n}{f}_{k}\left({\lambda }_{k0}\right)\right\}=\underset{\lambda \in \left[0,{\delta }_{0}\right]}{\mathrm{min}}\mathrm{exp}\left\{-{a}_{n}{f}_{k}\left(\lambda \right)\right\}\\ =\underset{\lambda \in \left[0,{\delta }_{0}\right]}{\mathrm{min}}\mathrm{exp}\left\{-\lambda {a}_{n}\right\}{\left(E\mathrm{exp}\left\{\lambda \left(X-EX\right)\right\}\right)}^{k}.\end{array}$
(2.5)

Remark 2.2. In particular, if we take a n = nε for any ε > 0 and k = n, then (2.4) becomes

$P\left(\sum _{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)>n\epsilon \right)\le exp\left\{-n\left({\lambda }_{n0}\epsilon -log\phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}E\phantom{\rule{0.3em}{0ex}}exp\left\{{\lambda }_{n0}\left(X-EX\right)\right\}\right)\right\},$
(2.6)

where λ n0 depends on ε. We remark that our result removes the condition ε ≤ Kδ, which is required in Theorem 1.A.
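A hedged numerical illustration of why optimizing over λ matters in (2.6). Under the same illustrative assumption as before (Y = X - EX uniform on {-1, +1}, so E exp{λY} = cosh λ), the optimized Chernoff-type bound is never worse than the bound at an arbitrary fixed λ:

```python
import math

# Sketch (assumption: Y uniform on {-1, +1}).  The bound of (2.6),
# exp{-n(lam*eps - log E exp{lam Y})}, is minimized over lam; any fixed,
# non-optimized lam can only give a weaker bound.
n, eps = 10, 0.5

def bound(lam):
    return math.exp(-n * (lam * eps - math.log(math.cosh(lam))))

lam_opt = math.atanh(eps)   # solves tanh(lam) = eps, the Petrov equation here
fixed = bound(0.25)         # an arbitrary non-optimal choice of lam

assert bound(lam_opt) <= fixed
assert bound(lam_opt) < 1.0   # the exponent in (2.6) is genuinely positive
```

This is precisely the sense in which the bound of Theorem 2.1 is optimal within the Chernoff family: no choice of λ in [0, δ0] can do better than λ n0.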

Furthermore, we give two propositions below to illustrate the significance of Theorems 2.1 and 2.2, respectively.

Proposition 2.2. Under the conditions of Theorem 1.A, we have ${\lambda }_{n0}\ne \frac{\epsilon }{2K}$, and then for each n ≥ 2,

$exp\left\{-n\left({\lambda }_{n0}\epsilon -log\phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}Eexp\left\{{\lambda }_{n0}\left(X-EX\right)\right\}\right)\right\}<exp\left\{-\frac{n{\epsilon }^{2}}{4K}\right\}.$
(2.7)

Proposition 2.3. Let X be a random variable with positive δ0, and define a function g(λ) ≡ E exp {λ(X - EX)}. Then g is strictly increasing and g(λ) > 1 for all λ > 0.

Subsequently, we get an exponential inequality for ${max}_{1\le k\le n}{\sum }_{i=1}^{k}\left({X}_{i}-E{X}_{i}\right)$.

Theorem 2.2. Let the conditions of Theorem 2.1 hold. Then for each fixed n ≥ 2, there exists a positive constant λ0 with λ n0 ≤ λ0 ≤ λ 10, such that

$P\left(\underset{1\le k\le n}{max}\sum _{i=1}^{k}\left({X}_{i}-E{X}_{i}\right)>{a}_{n}\right)\le {b}_{n}\left({\lambda }_{0}\right)exp\left\{-{a}_{n}{f}_{n}\left({\lambda }_{0}\right)\right\}$
(2.8)

and

$\begin{array}{rcl}{b}_{n}\left({\lambda }_{0}\right)exp\left\{-{a}_{n}{f}_{n}\left({\lambda }_{0}\right)\right\}& =& \underset{\lambda \in \left[0,{\delta }_{0}\right]}{min}{b}_{n}\left(\lambda \right)exp\left\{-{a}_{n}{f}_{n}\left(\lambda \right)\right\}\\ & =& \underset{\lambda \in \left[0,{\delta }_{0}\right]}{min}exp\left\{-\lambda {a}_{n}\right\}\sum _{k=1}^{n}{\left(E\,exp\left\{\lambda \left(X-EX\right)\right\}\right)}^{k},\end{array}$
(2.9)

where

${b}_{n}\left({\lambda }_{0}\right)\equiv \frac{{\left(E\mathrm{exp}\left\{{\lambda }_{0}\left(X-EX\right)\right\}\right)}^{n+1}-E\mathrm{exp}\left\{{\lambda }_{0}\left(X-EX\right)\right\}}{\left(E\mathrm{exp}\left\{{\lambda }_{0}\left(X-EX\right)\right\}-1\right){\left(E\mathrm{exp}\left\{{\lambda }_{0}\left(X-EX\right)\right\}\right)}^{n}}.$

Remark 2.3. By Proposition 2.3, it follows that

$0<{b}_{n}\left({\lambda }_{0}\right)\le \frac{E\phantom{\rule{0.3em}{0ex}}exp\left\{{\lambda }_{0}\left(X-EX\right)\right\}}{E\phantom{\rule{0.3em}{0ex}}exp\left\{{\lambda }_{0}\left(X-EX\right)\right\}-1}<\infty ,$

where the right-hand side does not depend on n.
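Remark 2.3 can be checked numerically; the sketch below takes an illustrative value g = E exp{λ0(X - EX)} = 1.3 (any g > 1 works, by Proposition 2.3) and verifies the uniform bound on b_n:

```python
# Sketch of Remark 2.3 (assumption: g = 1.3 is an arbitrary value > 1).
# With g = E exp{lam0 (X - EX)}, the constant
#   b_n(lam0) = (g**(n+1) - g) / ((g - 1) * g**n) = g * (1 - g**(-n)) / (g - 1)
# is positive and bounded above by g / (g - 1) uniformly in n.
g = 1.3

def b_n(n):
    return (g**(n + 1) - g) / ((g - 1) * g**n)

for n in [2, 5, 50, 500]:
    assert 0.0 < b_n(n) <= g / (g - 1.0)
```

The simplified form g(1 - g^{-n})/(g - 1) makes both the positivity and the n-free upper bound transparent.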

## 3 Proofs of theorems and propositions

Proof of Proposition 2.1. For convenience, set Y = X - EX and Y i = X i - EX i , 1 ≤ i ≤ n. For 0 ≤ λ < δ0 and 1 ≤ k ≤ n, by the definition of δ0 and the non-degeneracy of Y, it is clear that f k (λ) (see (2.1)) has continuous derivatives of every order, f k (0) = 0,

${f}_{k}^{\prime }\left(\lambda \right)=1-\frac{k}{{a}_{n}}\frac{EY\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}}{E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}},\phantom{\rule{1em}{0ex}}{f}_{k}^{\prime }\left(0\right)=1>0,$

and

$\begin{array}{c}{f}_{k}^{{}^{″}}\left(\lambda \right)=-\frac{k}{{a}_{n}}\frac{E{Y}^{2}\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}-{\left(EY\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}\right)}^{2}}{{\left(E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}\right)}^{2}},\\ {f}_{k}^{{}^{″}}\left(0\right)=-\frac{k}{{a}_{n}}E{Y}^{2}<0.\end{array}$

By the Cauchy-Schwarz inequality and the non-degeneracy of Y, we get

$\begin{array}{c}{\left(EY\,exp\left\{\lambda Y\right\}\right)}^{2}={\left(E\,Y\,exp\left\{\frac{1}{2}\lambda Y\right\}exp\left\{\frac{1}{2}\lambda Y\right\}\right)}^{2}\\ <E{Y}^{2}\,exp\left\{\lambda Y\right\}\,E\,exp\left\{\lambda Y\right\},\end{array}$

which derives ${f}_{k}^{{}^{″}}\left(\lambda \right)<0$.

We conclude from the above that ${f}_{k}^{\prime }\left(\lambda \right)$ is strictly decreasing on [0, δ0).

Next, we distinguish two cases.

Case 1: 0 < λ k1 < ∞, which means that the equation ${f}_{k}^{\prime }\left(\lambda \right)=0$ has a finite solution λ k1 . Clearly, λ k1 is unique, ${f}_{k}^{\prime }\left(\lambda \right)>0$ for 0 ≤ λ < λ k1 , and, whenever λ k1 < δ0, ${f}_{k}^{\prime }\left(\lambda \right)<0$ for λ k1 < λ ≤ δ0.

Taking λ k0 = λ k1 , obviously (2.2) holds, that is,

${f}_{k}\left({\lambda }_{k0}\right)=\underset{\lambda \in \left[0,{\delta }_{0}\right]}{max}{f}_{k}\left(\lambda \right)>0.$

Case 2: λ k1 = ∞, which means that the equation ${f}_{k}^{\prime }\left(\lambda \right)=0$ has no finite solution. Then f k (λ) strictly increases from 0 to f k (δ0) > 0. By λ k1 = ∞, h k (0) < 0, and h k (∞) = ∞, we have δ0 < ∞. Further, E exp {δ0 X} < ∞, for otherwise f k (δ0) = - ∞ < 0. Now take λ k0 = δ0; obviously, (2.2) still holds.

Finally, write $s\left(\lambda \right)=\frac{EY\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}}{E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}}$ on [0, δ0); then it is easy to see that

$s\left(0\right)=0,\phantom{\rule{1em}{0ex}}{s}^{\prime }\left(\lambda \right)=\frac{E{Y}^{2}\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}-{\left(EY\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}\right)}^{2}}{{\left(E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}\right)}^{2}}>0.$

Thus, s is a non-negative and strictly increasing function. So, from the identity ${f}_{k}^{\prime }\left({\lambda }_{k0}\right)=0$, that is,

$\frac{{a}_{n}}{k}=\frac{EY\phantom{\rule{0.3em}{0ex}}exp\left\{{\lambda }_{k0}Y\right\}}{E\phantom{\rule{0.3em}{0ex}}exp\left\{{\lambda }_{k0}Y\right\}},$

and since s is strictly increasing while a n /k increases as k decreases, we know that 0 < λ k0 ≤ λ k-1,0 ≤ δ0 for all 2 ≤ k ≤ n.

Proof of Theorem 2.1. As in the proof of Proposition 2.1, set Y = X - EX and Y i = X i - EX i , 1 ≤ i ≤ n. For each fixed n ≥ 2, 1 ≤ k ≤ n, and any 0 < λ < δ0, the Markov inequality and acceptability yield

$\begin{array}{rcl}P\left({\sum }_{i=1}^{k}{Y}_{i}>{a}_{n}\right)& \le & exp\left\{-\lambda {a}_{n}\right\}E\,exp\left\{\lambda {\sum }_{i=1}^{k}{Y}_{i}\right\}\\ & \le & exp\left\{-\lambda {a}_{n}\right\}{\left(E\,exp\left\{\lambda Y\right\}\right)}^{k}\\ & =& exp\left\{-{a}_{n}\left(\lambda -\frac{k}{{a}_{n}}log\,E\,exp\left\{\lambda Y\right\}\right)\right\}\\ & =& exp\left\{-{a}_{n}{f}_{k}\left(\lambda \right)\right\}.\end{array}$
(3.1)

From (3.1) and Proposition 2.1, there exists a unique 0 < λ k0 ≤ δ0 such that (2.2), (2.3), (2.4), and (2.5) hold.

Proof of Proposition 2.2. In the proof of Theorem 2.1 of Sung et al. [1], the inequality (3.1) is enlarged by their Lemma 2.1, which is proved using the Hölder inequality, the C r -inequality, and the Jensen inequality, respectively. Similarly to Sung et al. [1], we take $\lambda =\frac{\epsilon }{2K}$ and a n = nε; since X is a non-degenerate random variable, this enlargement is strict, and thus (2.7) holds.

Proof of Proposition 2.3. Write Y = X - EX and g(λ) = E exp{λY}, λ ∈ [0, δ0); thus

$g\left(0\right)=1,\phantom{\rule{1em}{0ex}}{g}^{\prime }\left(\lambda \right)=EY\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\},\phantom{\rule{1em}{0ex}}{g}^{\prime }\left(0\right)=EY=0$

and

${g}^{″}\left(\lambda \right)=E{Y}^{2}\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda Y\right\}>0,\phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{g}^{″}\left(0\right)=E{Y}^{2}>0.$

Therefore, g′ is strictly increasing with g′(0) = 0, so g′(λ) > 0 for λ > 0; combining this with g(0) = 1, we conclude that g is strictly increasing and g(λ) > 1 for all λ > 0.

Proof of Theorem 2.2. For each fixed n ≥ 2 and any 0 < λ < δ0, from the standard method and Proposition 2.3, it follows that

$\begin{array}{rcl}P\left(\underset{1\le k\le n}{max}{\sum }_{i=1}^{k}{Y}_{i}>{a}_{n}\right)& \le & {\sum }_{k=1}^{n}P\left({\sum }_{i=1}^{k}{Y}_{i}>{a}_{n}\right)\\ & \le & exp\left\{-\lambda {a}_{n}\right\}{\sum }_{k=1}^{n}{\left(E\,exp\left\{\lambda Y\right\}\right)}^{k}\\ & =& exp\left\{-\lambda {a}_{n}\right\}\frac{{\left(E\,exp\left\{\lambda Y\right\}\right)}^{n+1}-E\,exp\left\{\lambda Y\right\}}{E\,exp\left\{\lambda Y\right\}-1}\\ & =& exp\left\{-{a}_{n}{f}_{n}\left(\lambda \right)\right\}\frac{{\left(E\,exp\left\{\lambda Y\right\}\right)}^{n+1}-E\,exp\left\{\lambda Y\right\}}{\left(E\,exp\left\{\lambda Y\right\}-1\right){\left(E\,exp\left\{\lambda Y\right\}\right)}^{n}}\\ & \equiv & P\left(\lambda \right).\end{array}$
(3.2)

By (3.2), Proposition 2.1, and Theorem 2.1, we know that the function P(λ) is strictly decreasing for λ ∈ [0, λ n0 ] and strictly increasing for λ ∈ [λ 10 , δ0]. In addition, P(λ) is continuous. Hence, there exists some λ n0 ≤ λ0 ≤ λ 10 such that (2.9) holds.

Taking λ = λ0 in (3.2), we get (2.8).

## 4 Further discussion

In this section, we introduce the concept of widely acceptable random variables in order to extend the results of the previous sections. The family of acceptable random variables originates from the properties of negatively dependent random variables, and is thus itself one kind of negatively dependent family. As is well known, in practice there are also positively dependent random variables. Therefore, some researchers have constructed structures that cover not only common negatively dependent random variables but also positively dependent ones, extending the concept of negative dependence.

Wang et al. [11] introduced the concept of widely dependent random variables.

We say that the random variables {X i : i ≥ 1} are widely upper orthant dependent (WUOD) if there exists a finite real number sequence {g U (n): n ≥ 1} such that, for each n ≥ 1 and for all x i ∈ (-∞, ∞), 1 ≤ i ≤ n,

$P\left(\bigcap _{i=1}^{n}\left\{{X}_{i}>{x}_{i}\right\}\right)\le {g}_{U}\left(n\right)\prod _{i=1}^{n}P\left({X}_{i}>{x}_{i}\right).$

We say that the random variables {X i : i ≥ 1} are widely lower orthant dependent (WLOD) if there exists a finite real number sequence {g L (n): n ≥ 1} such that, for each n ≥ 1 and for all x i ∈ (-∞, ∞), 1 ≤ i ≤ n,

$P\left(\bigcap _{i=1}^{n}\left\{{X}_{i}\le {x}_{i}\right\}\right)\le {g}_{L}\left(n\right)\prod _{i=1}^{n}P\left({X}_{i}\le {x}_{i}\right).$

If the random variables {X i : i ≥ 1} are both WUOD and WLOD, we say they are widely orthant dependent (WOD).

If g U (n) = g L (n) = M (≥ 1), then the random variables are called extended negatively upper orthant dependent (ENUOD), extended negatively lower orthant dependent (ENLOD), and extended negatively orthant dependent (ENOD), respectively (see [12]). In particular, if M = 1, the random variables are called negatively upper orthant dependent (NUOD), negatively lower orthant dependent (NLOD), and negatively orthant dependent (NOD), respectively (see, for example, [10, 13, 14]).

Wang et al. [11] also presented some properties and examples of widely dependent random variables. Chen et al. [15] obtained the strong law of large numbers for END random variables, and Wang and Cheng [16] derived some basic renewal theorems for WOD random variables. Among these references, Wang et al. [11] pointed out that if the random variables {X i : i ≥ 1} are identically distributed and WUOD, then

$E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda \sum _{i=1}^{n}{X}_{i}\right\}\le {g}_{U}\left(n\right)\prod _{i=1}^{n}Eexp\left\{\lambda {X}_{i}\right\}\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}n\ge 1.$
(4.1)

Now, we naturally hope that the family of acceptable random variables can be extended by (4.1).

We say that the random variables {X i : i ≥ 1} are widely acceptable (WA) for δ0 > 0 if, for any real 0 < λ ≤ δ0, there exist positive numbers g(n), n ≥ 1, such that

$E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda \sum _{i=1}^{n}{X}_{i}\right\}\le g\left(n\right)\prod _{i=1}^{n}E\phantom{\rule{0.3em}{0ex}}exp\left\{\lambda {X}_{i}\right\}\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}n\ge 1.$
(4.2)

In particular, if g(n) ≡ M (≥ 1) in (4.2), the random variables {X i : i ≥ 1} are called extended acceptable (EA).

For WA random variables {X i : i ≥ 1}, we can obviously obtain exponential inequalities similar to those of Theorems 2.1 and 2.2 simply by adding a factor g(n) to the right-hand sides of (2.4) and (2.8). So we do not state them one by one.

The following example, constructed by Wang et al. [11], illustrates that the widely acceptable random variables properly include the acceptable random variables.

Example 4.1. Assume that the random vectors (X 2i-1 , X 2i ), i ≥ 1, are independent, and that for each i ≥ 1, the random variables X 2i-1 and X 2i are dependent according to the Farlie-Gumbel-Morgenstern copula with parameter θ i ∈ [-1, 1],

${C}_{{\theta }_{i}}\left(u,\nu \right)=u\nu +{\theta }_{i}u\nu \left(1-u\right)\left(1-\nu \right),\phantom{\rule{1em}{0ex}}\left(u,\nu \right)\in {\left[0,1\right]}^{2},$

which is absolutely continuous with density

${c}_{{\theta }_{i}}\left(u,\nu \right)=\frac{{\partial }^{2}{C}_{{\theta }_{i}}\left(u,\nu \right)}{\partial u\partial \nu }$

(see Example 3.12 of Nelsen [17]).

Denote the common distribution and density of {X i : i ≥ 1} by F and f, respectively. Hence, by Sklar's theorem (see, for example, Chap. 2 of Nelsen [17]), for each i ≥ 1 and any x 2i-1 , x 2i ∈ (-∞, ∞), it holds that

$\begin{array}{rcl}F\left({x}_{2i-1},{x}_{2i}\right)& =& P\left({X}_{2i-1}\le {x}_{2i-1},{X}_{2i}\le {x}_{2i}\right)\\ & =& {C}_{{\theta }_{i}}\left(F\left({x}_{2i-1}\right),F\left({x}_{2i}\right)\right)\\ & =& F\left({x}_{2i-1}\right)F\left({x}_{2i}\right)\left(1+{\theta }_{i}\overline{F}\left({x}_{2i-1}\right)\overline{F}\left({x}_{2i}\right)\right)\end{array}$

and

$\begin{array}{c}f\left({x}_{2i-1},{x}_{2i}\right)=\frac{{\partial }^{2}F\left({x}_{2i-1},{x}_{2i}\right)}{\partial {x}_{2i-1}\partial {x}_{2i}}\\ =f\left({x}_{2i-1}\right)f\left({x}_{2i}\right)\left(1+{\theta }_{i}\left(1-2F\left({x}_{2i-1}\right)\right)\left(1-2F\left({x}_{2i}\right)\right)\right).\end{array}$

If E exp{λX1} < ∞, let a = E exp{λX1}, $b={\int }_{-\infty }^{\infty }{\mathsf{\text{e}}}^{\lambda x}F\left(x\right)\mathsf{\text{d}}F\left(x\right)$, and $c={\left(1-\frac{2b}{a}\right)}^{2}$; then a simple calculation gives

$Eexp\left\{\lambda \left({X}_{2i-1}+{X}_{2i}\right)\right\}={a}^{2}\left(1+c{\theta }_{i}\right).$

Hence, for n = 2m, m ≥ 1,

$Eexp\left\{\lambda \sum _{i=1}^{n}{X}_{i}\right\}={a}^{n}\prod _{i=1}^{\frac{n}{2}}\left(1+c{\theta }_{i}\right).$
(4.3)
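Identity (4.3) (for one pair, n = 2) can be verified numerically. The sketch below assumes Uniform(0,1) marginals, so F(x) = x; this is an illustrative choice made only for the sketch, not in the paper, and the integrals are approximated by midpoint Riemann sums:

```python
import math

# Hedged numerical check (assumption: Uniform(0,1) marginals, F(x) = x) of the
# identity  E exp{lam(X_{2i-1} + X_{2i})} = a^2 (1 + c*theta_i)  for an
# FGM-coupled pair, with midpoint Riemann sums for all integrals.
lam, theta, m = 0.8, 0.6, 500
h = 1.0 / m
xs = [(j + 0.5) * h for j in range(m)]     # midpoints of [0,1]

a = sum(math.exp(lam * x) for x in xs) * h          # a = E e^{lam X}
b = sum(math.exp(lam * x) * x for x in xs) * h      # b = int e^{lam x} F(x) dF(x)
c = (1.0 - 2.0 * b / a) ** 2

# E exp{lam(X1 + X2)} against the FGM density 1 + theta (1-2u)(1-2v)
lhs = sum(
    math.exp(lam * (u + v)) * (1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))
    for u in xs for v in xs
) * h * h

assert abs(lhs - a * a * (1.0 + c * theta)) < 1e-3
```

The agreement reflects the exact algebra behind (4.3): the FGM cross term integrates to θ(a - 2b)², which equals c·a²·θ.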

Write $g\left(n\right)={\prod }_{i=1}^{\frac{n}{2}}\left(1+c{\theta }_{i}\right)$. Obviously, the above random variables {X i : i ≥ 1} are widely acceptable, but are not acceptable when θ i > 0, since then g(n) > 1; moreover, different choices of θ i , i ≥ 1, lead to different values of g(n). So we first determine the range of c.

Proposition 4.1 Let the random variable X be non-degenerate, and suppose there exists some λ > 0 such that E exp{λX} < ∞. Then b < a < 2b and 0 < c < 1, where a, b, c are as above.

Proof. Firstly, we prove a < 2b. Let Y be a random variable with distribution G satisfying G(x) = F 2 (x), x ∈ (-∞, ∞). Then, since P(e^{λY} > y) = $\overline{G}\left({\lambda }^{-1}log\,y\right)$ for y > 0, the tail-integral formula for expectations gives

$\begin{array}{rcl}2b& =& 2\underset{-\infty }{\overset{\infty }{\int }}{e}^{\lambda y}F\left(y\right)dF\left(y\right)=\underset{-\infty }{\overset{\infty }{\int }}{e}^{\lambda y}dG\left(y\right)\\ & =& \underset{0}{\overset{\infty }{\int }}\overline{{F}^{2}}\left({\lambda }^{-1}log\,y\right)dy\end{array}$
(4.4)

and

$\begin{array}{rcl}a& =& Eexp\left\{\lambda X\right\}=\underset{0}{\overset{\infty }{\int }}\overline{F}\left({\lambda }^{-1}log\,y\right)dy.\end{array}$
(4.5)

Hence, since $\overline{{F}^{2}}\ge \overline{F}$ with strict inequality on a set of positive measure (by the non-degeneration of X), (4.4) and (4.5) imply a < 2b immediately. Subsequently, we show that b < a holds. In fact,

$\begin{array}{rcl}2b-a& =& \underset{0}{\overset{\infty }{\int }}\left(F\left({\lambda }^{-1}log\,y\right)-{F}^{2}\left({\lambda }^{-1}log\,y\right)\right)dy\\ & =& \underset{0}{\overset{\infty }{\int }}F\left({\lambda }^{-1}log\,y\right)\overline{F}\left({\lambda }^{-1}log\,y\right)dy\\ & <& \underset{0}{\overset{\infty }{\int }}\overline{F}\left({\lambda }^{-1}log\,y\right)dy=a.\end{array}$

Finally, by 0 < 2b - a < a, we get 0 < c < 1.

Now, assume that ${\theta }_{i}=\frac{1}{{i}^{2}},1\le i\le m$; then $M={\prod }_{i=1}^{\infty }\left(1+\frac{1}{{i}^{2}}\right)<\infty$, and owing to 0 < c < 1, we have

$g\left(n\right)=\prod _{i=1}^{m}\left(1+\frac{c}{{i}^{2}}\right)<M<\infty .$

If we take ${\theta }_{i}=\frac{1}{i},1\le i\le m$, then

$\begin{array}{c}\hfill g\left(n\right)=\prod _{i=1}^{m}\left(1+\frac{c}{i}\right)\le \prod _{i=1}^{m}\left(1+\frac{1}{i}\right)=m+1=\frac{n}{2}+1.\hfill \end{array}$
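The two growth rates of g(n) above can be checked directly; c = 0.4 below is an arbitrary illustrative value in (0, 1), and the infinite product M is truncated for computation:

```python
import math

# Sketch (assumption: c = 0.4, an arbitrary value in (0,1)).
# theta_i = 1/i^2 gives a bounded g(n), while theta_i = 1/i gives at most
# linear growth, g(n) <= m + 1 with m = n/2.
c = 0.4
# truncation of the convergent infinite product M = prod(1 + 1/i^2)
M = math.prod(1.0 + 1.0 / i**2 for i in range(1, 100000))

for m in [1, 10, 1000]:
    g_sq = math.prod(1.0 + c / i**2 for i in range(1, m + 1))   # theta_i = 1/i^2
    g_lin = math.prod(1.0 + c / i for i in range(1, m + 1))     # theta_i = 1/i
    assert g_sq < M          # bounded uniformly in n, the EA case
    assert g_lin <= m + 1    # matches the telescoping bound prod(1+1/i) = m+1
```

The bounded case corresponds to extended acceptability (g(n) ≤ M), while θ i = 1/i already leaves that class yet remains widely acceptable.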

If we take θ i ∈ [-1, 0], then g(n) ≤ 1, that is, the random variables {X i : i ≥ 1} are acceptable.

Obviously, different choices of θ i , 1 ≤ i ≤ m, give different values of g(n), and hence different kinds of exponential inequalities; we do not state them one by one.

## References

1. Sung SH, Srisuradetchai P, Volodin A: A note on the exponential inequality for a class of dependent random variables. J Korean Stat Soc 2011, 40: 109–114. 10.1016/j.jkss.2010.08.002

2. Kim TS, Kim HC: On the exponential inequality for negative dependent sequence. Commun Korean Math Soc 2007, 22: 315–321. 10.4134/CKMS.2007.22.2.315

3. Nooghabi HJ, Azarnoosh HA: Exponential inequality for negatively associated random variables. Stat Papers 2009, 50: 419–428. 10.1007/s00362-007-0081-4

4. Sung SH: An exponential inequality for negatively associated random variables. J Ineq Appl 2009, 7. Article ID 649427

5. Xing G: On the exponential inequalities for strictly stationary and negatively associated random variables. J Stat Plan Infer 2009, 139: 3453–3460. 10.1016/j.jspi.2009.03.023

6. Xing G, Yang S, Liu A, Wang X: A remark on the exponential inequality for negatively associated random variables. J Korean Stat Soc 2009, 38: 53–57. 10.1016/j.jkss.2008.06.005

7. Xing G, Yang S: An exponential inequality for strictly stationary and negatively associated random variables. Commun Stat Theor Methods 2010, 39: 340–349.

8. Giuliano Antonini R, Kozachenko Y, Volodin A: Convergence of series of dependent φ-subGaussian random variables. J Math Anal Appl 2008, 338: 1188–1203. 10.1016/j.jmaa.2007.05.073

9. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann Stat 1983,11(1):286–295. 10.1214/aos/1176346079

10. Lehmann E: Some concepts of dependence. Ann Math Stat 1966, 37: 1137–1153. 10.1214/aoms/1177699260

11. Wang K, Wang Y, Gao Q: Uniform asymptotics for the finite-time ruin probability of a new dependent risk model with a constant interest rate. Method Comput Appl Probab 2010.

12. Liu L: Precise large deviations for dependent random variables with heavy tails. Stat Probab Lett 2009, 79: 1290–1298. 10.1016/j.spl.2009.02.001

13. Block HW, Savits TH, Shaked M: Some concepts of negative dependence. Ann Probab 1982, 10: 765–772. 10.1214/aop/1176993784

14. Ebrahimi N, Ghosh M: Multivariate negative dependence. Commun Stat 1981, 10: 307–337. 10.1080/03610928108828041

15. Chen Y, Chen A, Ng KW: The strong law of large numbers for extended negatively dependent random variables. J Appl Probab 2010,47(4):908–922. 10.1239/jap/1294170508

16. Wang Y, Cheng D: Basic renewal theorems for a random walk with widely dependent increments and their applications. J Math Anal Appl 2011, 384: 597–606. 10.1016/j.jmaa.2011.06.010

17. Nelsen RB: An Introduction to Copulas. Second edition. Springer, New York; 2006.

## Acknowledgements

The authors thank the referee and the editor for their very valuable comments on an earlier version of this paper. This research is supported by the National Science Foundation of China (NO. 11071182), and the third author Qingwu Gao's work is supported by Research Start-up Funding for PhD of Nanjing Audit University (NO. NSRC10022).

## Author information


### Corresponding author

Correspondence to Yuebao Wang.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

The second author YL found the main reference (Sung S.H., Srisuradetchai P. and Volodin A. (2011, J. Korean Stat. Soc.)) of this paper in the literature study, and read it in the first author YW's workshop. Then YW put forward three main problems and some corresponding ideas and methods to the three problems. Finally, the third author QG and YL carried out concretely the above ideas and methods, and accomplished this paper.

## Rights and permissions


Wang, Y., Li, Y. & Gao, Q. On the exponential inequality for acceptable random variables. J Inequal Appl 2011, 40 (2011). https://doi.org/10.1186/1029-242X-2011-40