• Research
• Open Access

# Some exponential inequalities for acceptable random variables and complete convergence

Journal of Inequalities and Applications 2011, 2011:142

https://doi.org/10.1186/1029-242X-2011-142

• Received: 6 July 2011
• Accepted: 22 December 2011
• Published:

## Abstract

Some exponential inequalities for a sequence of acceptable random variables are obtained, such as a Bernstein-type inequality and a Hoeffding-type inequality. The Bernstein-type inequality for acceptable random variables generalizes and improves the corresponding results presented by Yang for NA random variables and by Wang et al. for NOD random variables. Using these exponential inequalities, we further study the complete convergence for acceptable random variables.

MSC(2000): 60E15, 60F15.

## Keywords

• acceptable random variables
• exponential inequality
• complete convergence

## 1 Introduction

Let {X n , n ≥ 1} be a sequence of random variables defined on a fixed probability space $\left(\mathrm{\Omega },\mathcal{F},P\right)$. Exponential inequalities for the partial sums ${\sum }_{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)$ play an important role in various proofs of limit theorems. In particular, they provide a measure of the convergence rate for the strong law of large numbers. Several versions are available in the literature for independent random variables, under assumptions of uniform boundedness or some, quite relaxed, control on their moments. While the independent case is classical in the literature, the treatment of dependent variables is more recent.

First, we recall the definitions of some dependence structures.

Definition 1.1. A finite collection of random variables X1, X2,..., X n is said to be negatively associated (NA) if for every pair of disjoint subsets A1, A2 of {1, 2,..., n},
$Cov\left\{f\left({X}_{i}:i\in {A}_{1}\right),g\left({X}_{j}:j\in {A}_{2}\right)\right\}\le 0,$
(1.1)

whenever f and g are coordinatewise nondecreasing (or coordinatewise nonincreasing) such that this covariance exists. An infinite sequence of random variables {X n , n ≥ 1} is NA if every finite subcollection is NA.

Definition 1.2. A finite collection of random variables X1, X2,..., X n is said to be negatively upper orthant dependent (NUOD) if for all real numbers x1, x2,..., x n ,
$P\left({X}_{i}>{x}_{i},i=1,2,\dots ,n\right)\le \prod _{i=1}^{n}P\left({X}_{i}>{x}_{i}\right),$
(1.2)
and negatively lower orthant dependent (NLOD) if for all real numbers x1, x2,..., x n ,
$P\left({X}_{i}\le {x}_{i},i=1,2,\dots ,n\right)\le \prod _{i=1}^{n}P\left({X}_{i}\le {x}_{i}\right).$
(1.3)

A finite collection of random variables X1, X2,..., X n is said to be negatively orthant dependent (NOD) if they are both NUOD and NLOD. An infinite sequence {X n , n ≥ 1} is said to be NOD if every finite subcollection is NOD.

The concept of NA random variables was introduced by Alam and Saxena  and carefully studied by Joag-Dev and Proschan . Joag-Dev and Proschan  pointed out that a number of well-known multivariate distributions possess the negative association property, such as the multinomial, convolution of unlike multinomials, multivariate hypergeometric, Dirichlet, permutation distribution, negatively correlated normal distribution, random sampling without replacement, and joint distribution of ranks. The notion of NOD random variables was introduced by Lehmann  and developed in Joag-Dev and Proschan . Obviously, independent random variables are NOD. Joag-Dev and Proschan  pointed out that NA random variables are NOD, but neither NUOD nor NLOD implies NA. They also presented an example in which X = (X1, X2, X3, X4) is NOD but not NA. Hence, NOD is weaker than NA.

Recently, Giuliano et al.  introduced the following notion of acceptability.

Definition 1.3. We say that a finite collection of random variables X1, X2,..., X n is acceptable if for any real λ,
$Eexp\left(\lambda \sum _{i=1}^{n}{X}_{i}\right)\le \prod _{i=1}^{n}Eexp\left(\lambda {X}_{i}\right).$
(1.4)

An infinite sequence of random variables {X n , n ≥ 1} is acceptable if every finite subcollection is acceptable.

Since it is required that the inequality (1.4) holds for all λ, Sung et al.  weakened the condition on λ and gave the following definition of acceptability.

Definition 1.4. We say that a finite collection of random variables X1, X2,..., X n is acceptable if there exists δ > 0 such that for any λ ∈ (-δ, δ),
$Eexp\left(\lambda \sum _{i=1}^{n}{X}_{i}\right)\le \prod _{i=1}^{n}Eexp\left(\lambda {X}_{i}\right).$
(1.5)

An infinite sequence of random variables {X n , n ≥ 1} is acceptable if every finite subcollection is acceptable.

First, we point out that Definition 1.3 of acceptability will be used in the current article. As mentioned in Giuliano et al. , a sequence of NOD random variables with a finite Laplace transform or finite moment generating function near zero (and hence a sequence of NA random variables with finite Laplace transform, too) provides an example of acceptable random variables. For example, Xing et al.  considered a strictly stationary NA sequence of random variables; by the observation above, such a sequence is acceptable.
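As a concrete illustration (ours, not the article's), inequality (1.4) can be verified in closed form for a centered bivariate normal pair with nonpositive correlation, which is NA and hence acceptable: the moment generating function of X + Y is exp(λ²(σ₁² + σ₂² + 2ρσ₁σ₂)/2), which is dominated by the product exp(λ²(σ₁² + σ₂²)/2) of the marginal transforms whenever ρ ≤ 0.

```python
import math

def mgf_sum_normal(s1, s2, rho, lam):
    # E exp(lam*(X+Y)) for centered jointly normal (X, Y):
    # X+Y is normal with variance s1^2 + s2^2 + 2*rho*s1*s2.
    var = s1**2 + s2**2 + 2 * rho * s1 * s2
    return math.exp(lam**2 * var / 2)

def mgf_product_normal(s1, s2, lam):
    # Product of the marginal MGFs: E exp(lam*X) * E exp(lam*Y).
    return math.exp(lam**2 * (s1**2 + s2**2) / 2)

# With rho <= 0, the MGF of the sum is dominated by the product of the
# marginal MGFs for every real lam, i.e. (1.4) holds.
for lam in [-2.0, -0.5, 0.1, 1.0, 3.0]:
    lhs = mgf_sum_normal(1.0, 1.5, -0.4, lam)
    rhs = mgf_product_normal(1.0, 1.5, lam)
    assert lhs <= rhs, (lam, lhs, rhs)
```

With ρ = 0 the two sides coincide, matching the fact that independent random variables satisfy (1.4) with equality.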

Another interesting example of a sequence {Z n , n ≥ 1} of acceptable random variables can be constructed in the following way. Feller [, Problem III.1] (cf. also Romano and Siegel [, Section 4.30]) provides an example of two random variables X and Y such that the density of their sum is the convolution of their densities, yet they are not independent. It is easy to see that X and Y are not negatively dependent either. Since they are bounded, their Laplace transforms E exp(λX) and E exp(λY) are finite for any λ. Next, since the density of their sum is the convolution of their densities, we have
$Eexp\left(\lambda \left(X+Y\right)\right)=Eexp\left(\lambda X\right)Eexp\left(\lambda Y\right).$

The announced sequence of acceptable random variables {Z n , n ≥ 1} can be now constructed in the following way. Let (X k , Y k ) be independent copies of the random vector (X, Y), k ≥ 1. For any n ≥ 1, set Z n = X k if n = 2k + 1 and Z n = Y k if n = 2k. Hence, the model of acceptable random variables that we consider in this article (Definition 1.3) is more general than models considered in the previous literature. Studying the limiting behavior of acceptable random variables is of interest.

Recently, Sung et al.  established an exponential inequality for a random variable with the finite Laplace transform. Using this inequality, they obtained an exponential inequality for identically distributed acceptable random variables which have the finite Laplace transforms. The main purpose of the article is to establish some exponential inequalities for acceptable random variables under very mild conditions. Furthermore, we will study the complete convergence for acceptable random variables using the exponential inequalities.

Throughout the article, let {X n , n ≥ 1} be a sequence of acceptable random variables and denote ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ for each n ≥ 1.

Remark 1.1. If {X n , n ≥ 1} is a sequence of acceptable random variables, then {-X n , n ≥ 1} is still a sequence of acceptable random variables. Furthermore, we have for each n ≥ 1,
$\begin{array}{cc}\hfill Eexp\left(\lambda \sum _{i=1}^{n}\left({X}_{i}-E{X}_{i}\right)\right)& =exp\left(-\lambda \sum _{i=1}^{n}E{X}_{i}\right)Eexp\left(\lambda \sum _{i=1}^{n}{X}_{i}\right)\hfill \\ \le \left[\prod _{i=1}^{n}exp\left(-\lambda E{X}_{i}\right)\right]\left[\prod _{i=1}^{n}Eexp\left(\lambda {X}_{i}\right)\right]\hfill \\ =\prod _{i=1}^{n}Eexp\left(\lambda \left({X}_{i}-E{X}_{i}\right)\right).\hfill \end{array}$

Hence, {X n - EX n , n ≥ 1} is also a sequence of acceptable random variables.

The following lemma is useful.

Lemma 1.1. If X is a random variable such that a ≤ X ≤ b, where a and b are finite real numbers, then for any real number h,
$E{e}^{hX}\le \frac{b-EX}{b-a}{e}^{ha}+\frac{EX-a}{b-a}{e}^{hb}.$
(1.6)
Proof. Since the exponential function exp(hX) is convex, its graph is bounded above on the interval a ≤ X ≤ b by the straight line which connects its ordinates at X = a and X = b. Thus
${e}^{hX}\le \frac{{e}^{hb}-{e}^{ha}}{b-a}\left(X-a\right)+{e}^{ha}=\frac{b-X}{b-a}{e}^{ha}+\frac{X-a}{b-a}{e}^{hb},$

which implies (1.6).   □
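For a quick numerical sanity check of (1.6) (our illustration, using an arbitrary bounded discrete distribution):

```python
import math

# A bounded discrete random variable with values in [a, b]; the
# distribution below is arbitrary, chosen only for illustration.
a, b = -1.0, 2.0
values = [-1.0, 0.0, 0.5, 2.0]
probs  = [0.2, 0.3, 0.4, 0.1]

EX = sum(p * x for p, x in zip(probs, values))

def lhs(h):
    # E e^{hX}, computed exactly from the distribution.
    return sum(p * math.exp(h * x) for p, x in zip(probs, values))

def rhs(h):
    # The chord bound (1.6) of Lemma 1.1.
    return (b - EX) / (b - a) * math.exp(h * a) + (EX - a) / (b - a) * math.exp(h * b)

for h in [-3.0, -1.0, -0.1, 0.0, 0.1, 1.0, 3.0]:
    assert lhs(h) <= rhs(h) + 1e-12, (h, lhs(h), rhs(h))
```

At h = 0 both sides equal 1, reflecting that the chord and the exponential agree at the endpoints of the interval.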

The rest of the article is organized as follows. In Section 2, we will present some exponential inequalities for a sequence of acceptable random variables, such as a Bernstein-type inequality and a Hoeffding-type inequality. The Bernstein-type inequality for acceptable random variables generalizes and improves the corresponding results of Yang  for NA random variables and Wang et al.  for NOD random variables. In Section 3, we will study the complete convergence for acceptable random variables using the exponential inequalities established in Section 2.

## 2 Exponential inequalities for acceptable random variables

In this section, we will present some exponential inequalities for acceptable random variables, namely a Bernstein-type inequality and a Hoeffding-type inequality.

Theorem 2.1. Let {X n , n ≥ 1} be a sequence of acceptable random variables with EX i = 0 and $E{X}_{i}^{2}={\sigma }_{i}^{2}<\infty$ for each i ≥ 1. Denote ${B}_{n}^{2}={\sum }_{i=1}^{n}{\sigma }_{i}^{2}$ for each n ≥ 1. If there exists a positive number c such that |X i | ≤ cB n for each 1 ≤ i ≤ n, n ≥ 1, then for any ε > 0,
$P\left({S}_{n}∕{B}_{n}\ge \epsilon \right)\le \left\{\begin{array}{cc}exp\left[-\frac{{\epsilon }^{2}}{2}\left(1-\frac{\epsilon c}{2}\right)\right]& \text{if}\ \epsilon c\le 1,\\ exp\left(-\frac{\epsilon }{4c}\right)& \text{if}\ \epsilon c>1.\end{array}\right.$
(2.1)
Proof. For fixed n ≥ 1, take t > 0 such that tcB n ≤ 1. It is easily seen that
$\mid E{X}_{i}^{k}\mid \phantom{\rule{2.77695pt}{0ex}}\le {\left(c{B}_{n}\right)}^{k-2}E{X}_{i}^{2},\phantom{\rule{1em}{0ex}}k\ge 2.$
Hence,
$\begin{array}{cc}\hfill E{e}^{t{X}_{i}}& =1+\sum _{k=2}^{\infty }\frac{{t}^{k}}{k!}E{X}_{i}^{k}\le 1+\frac{{t}^{2}}{2}E{X}_{i}^{2}\left(1+\frac{t}{3}c{B}_{n}+\frac{{t}^{2}}{12}{c}^{2}{B}_{n}^{2}+\cdots \right)\hfill \\ \le 1+\frac{{t}^{2}}{2}E{X}_{i}^{2}\left(1+\frac{t}{2}c{B}_{n}\right)\le exp\left[\frac{{t}^{2}}{2}E{X}_{i}^{2}\left(1+\frac{t}{2}c{B}_{n}\right)\right].\hfill \end{array}$
By Definition 1.3 and the inequality above, we have
$E{e}^{t{S}_{n}}=E\left(\prod _{i=1}^{n}{e}^{t{X}_{i}}\right)\le \prod _{i=1}^{n}E{e}^{t{X}_{i}}\le exp\left[\frac{{t}^{2}}{2}{B}_{n}^{2}\left(1+\frac{t}{2}c{B}_{n}\right)\right],$
which implies that
$P\left({S}_{n}∕{B}_{n}\ge \epsilon \right)\le exp\left[-t\epsilon {B}_{n}+\frac{{t}^{2}}{2}{B}_{n}^{2}\left(1+\frac{t}{2}c{B}_{n}\right)\right].$
(2.2)

We take $t=\frac{\epsilon }{{B}_{n}}$ when εc ≤ 1, and take $t=\frac{1}{c{B}_{n}}$ when εc > 1. The desired result (2.1) then follows immediately from (2.2).   □
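The algebra behind the choice of t can be double-checked numerically; the values of ε, c, and B_n below are arbitrary (our sketch, not part of the article):

```python
def exponent(t, eps, c, Bn):
    # Exponent on the right-hand side of (2.2):
    # -t*eps*Bn + (t^2/2)*Bn^2*(1 + t*c*Bn/2).
    return -t * eps * Bn + 0.5 * t**2 * Bn**2 * (1 + 0.5 * t * c * Bn)

Bn = 3.0  # arbitrary illustrative value

# Case eps*c <= 1: t = eps/Bn gives exactly -(eps^2/2)*(1 - eps*c/2).
eps, c = 0.8, 1.0
t = eps / Bn
assert abs(exponent(t, eps, c, Bn) - (-(eps**2 / 2) * (1 - eps * c / 2))) < 1e-12

# Case eps*c > 1: t = 1/(c*Bn) gives -eps/c + 3/(4*c^2) <= -eps/(4*c).
eps, c = 2.0, 1.0
t = 1 / (c * Bn)
val = exponent(t, eps, c, Bn)
assert abs(val - (-eps / c + 3 / (4 * c**2))) < 1e-12
assert val <= -eps / (4 * c)
```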

Theorem 2.2. Let {X n , n ≥ 1} be a sequence of acceptable random variables with EX i = 0 and |X i | ≤ b for each i ≥ 1, where b is a positive constant. Denote ${\sigma }_{i}^{2}=E{X}_{i}^{2}$ and ${B}_{n}^{2}={\sum }_{i=1}^{n}{\sigma }_{i}^{2}$ for each n ≥ 1. Then, for any ε > 0,
$P\left({S}_{n}\ge \epsilon \right)\le exp\left\{-\frac{{\epsilon }^{2}}{2{B}_{n}^{2}+\frac{2}{3}b\epsilon }\right\}$
(2.3)
and
$P\left(\mid {S}_{n}\mid \ge \epsilon \right)\le 2exp\left\{-\frac{{\epsilon }^{2}}{2{B}_{n}^{2}+\frac{2}{3}b\epsilon }\right\}.$
(2.4)
Proof. For any t > 0, by Taylor's expansion, EX i = 0 and the inequality $1+x\le {e}^{x}$, we can get that for i = 1, 2,..., n,
$\begin{array}{cc}\hfill Eexp\left\{t{X}_{i}\right\}& =1+\sum _{j=2}^{\infty }\frac{E{\left(t{X}_{i}\right)}^{j}}{j!}\le 1+\sum _{j=2}^{\infty }\frac{{t}^{j}E\mid {X}_{i}{\mid }^{j}}{j!}\hfill \\ =1+\frac{{t}^{2}{\sigma }_{i}^{2}}{2}\sum _{j=2}^{\infty }\frac{{t}^{j-2}E\mid {X}_{i}{\mid }^{j}}{\frac{1}{2}{\sigma }_{i}^{2}j!}\hfill \\ =1+\frac{{t}^{2}{\sigma }_{i}^{2}}{2}{F}_{i}\left(t\right)\le exp\left\{\frac{{t}^{2}{\sigma }_{i}^{2}}{2}{F}_{i}\left(t\right)\right\},\hfill \end{array}$
(2.5)
where
${F}_{i}\left(t\right)=\sum _{j=2}^{\infty }\frac{{t}^{j-2}E\mid {X}_{i}{\mid }^{j}}{\frac{1}{2}{\sigma }_{i}^{2}j!},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,n.$
Denote C = b/3 and ${M}_{n}=\frac{b\epsilon }{3{B}_{n}^{2}}+1$. Choose t > 0 such that tC < 1 and
$tC\le \frac{{M}_{n}-1}{{M}_{n}}=\frac{C\epsilon }{C\epsilon +{B}_{n}^{2}}.$
It is easy to check that for i = 1, 2,..., n and j ≥ 2,
$E\mid {X}_{i}{\mid }^{j}\le {\sigma }_{i}^{2}{b}^{j-2}\le \frac{1}{2}{\sigma }_{i}^{2}{C}^{j-2}j!,$
which implies that for i = 1, 2,..., n,
${F}_{i}\left(t\right)=\sum _{j=2}^{\infty }\frac{{t}^{j-2}E\mid {X}_{i}{\mid }^{j}}{\frac{1}{2}{\sigma }_{i}^{2}j!}\le \sum _{j=2}^{\infty }{\left(tC\right)}^{j-2}={\left(1-tC\right)}^{-1}\le {M}_{n}.$
(2.6)
By Markov's inequality, Definition 1.3, (2.5) and (2.6), we can get
$P\left({S}_{n}\ge \epsilon \right)\le {e}^{-t\epsilon }Eexp\left\{t{S}_{n}\right\}\le {e}^{-t\epsilon }\prod _{i=1}^{n}Eexp\left\{t{X}_{i}\right\}\le exp\left\{-t\epsilon +\frac{{t}^{2}{B}_{n}^{2}}{2}{M}_{n}\right\}.$
(2.7)
Take $t=\frac{\epsilon }{{B}_{n}^{2}{M}_{n}}=\frac{\epsilon }{C\epsilon +{B}_{n}^{2}}$; then tC < 1 and $tC=\frac{C\epsilon }{C\epsilon +{B}_{n}^{2}}$. Substituting this value of t into the right-hand side of (2.7), we obtain (2.3) immediately. By (2.3), we have
$P\left({S}_{n}\le -\epsilon \right)=P\left(-{S}_{n}\ge \epsilon \right)\le exp\left\{-\frac{{\epsilon }^{2}}{2{B}_{n}^{2}+\frac{2}{3}b\epsilon }\right\},$
(2.8)

since {-X n , n ≥ 1} is still a sequence of acceptable random variables. The desired result (2.4) follows from (2.3) and (2.8) immediately.   □
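As an illustration (ours), the bound (2.4) can be compared against a seeded Monte Carlo estimate for independent Uniform(-1, 1) summands, which are acceptable since independent random variables satisfy (1.4) with equality:

```python
import math, random

random.seed(0)  # fixed seed so the experiment is reproducible

# Independent (hence acceptable) bounded, zero-mean summands: Uniform(-1, 1).
n, b = 50, 1.0
sigma2 = 1.0 / 3.0          # variance of Uniform(-1, 1)
Bn2 = n * sigma2            # B_n^2

def bernstein_bound(eps):
    # Right-hand side of (2.4).
    return 2 * math.exp(-eps**2 / (2 * Bn2 + (2.0 / 3.0) * b * eps))

eps = 10.0
trials = 20000
hits = 0
for _ in range(trials):
    s = sum(random.uniform(-1, 1) for _ in range(n))
    if abs(s) >= eps:
        hits += 1
empirical = hits / trials

# The empirical tail probability should sit below the Bernstein-type bound.
assert empirical <= bernstein_bound(eps)
```

Here the bound evaluates to about 2e^{-2.5} ≈ 0.164, comfortably above the empirical tail frequency.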

Remark 2.1. By Theorem 2.2, we can get that for any t > 0,
$P\left(\mid {S}_{n}\mid \ge nt\right)\le 2exp\left\{-\frac{{n}^{2}{t}^{2}}{2{B}_{n}^{2}+\frac{2}{3}bnt}\right\}$
and
$P\left(\mid {S}_{n}\mid \ge {B}_{n}t\right)\le 2exp\left\{-\frac{{t}^{2}}{2+\frac{2}{3}\cdot \frac{bt}{{B}_{n}}}\right\}.$
It is well known that for independent random variables the upper bound of P(|S n | ≥ nt) is also $2exp\left\{-\frac{{n}^{2}{t}^{2}}{2{B}_{n}^{2}+\frac{2}{3}bnt}\right\}$. So Theorem 2.2 extends the corresponding results for independent random variables without adding any extra conditions. In addition, it is easy to check that
$exp\left\{-\frac{{\epsilon }^{2}}{2{B}_{n}^{2}+\frac{2}{3}b\epsilon }\right\}<exp\left\{-\frac{{\epsilon }^{2}}{2\left(2{B}_{n}^{2}+b\epsilon \right)}\right\},$

which implies that our Theorem 2.2 generalizes and improves the corresponding results of Yang [9, Lemma 3.5] for NA random variables and Wang et al. [10, Theorem 2.3] for NOD random variables.

In the following, we will provide the Hoeffding-type inequality for acceptable random variables.

Theorem 2.3. Let {X n , n ≥ 1} be a sequence of acceptable random variables. If there exist two sequences of real numbers {a n , n ≥ 1} and {b n , n ≥ 1} such that a i ≤ X i ≤ b i for each i ≥ 1, then for any ε > 0 and n ≥ 1,
$P\left({S}_{n}-E{S}_{n}\ge \epsilon \right)\le exp\left\{-\frac{2{\epsilon }^{2}}{{\sum }_{i=1}^{n}{\left({b}_{i}-{a}_{i}\right)}^{2}}\right\},$
(2.9)
$P\left({S}_{n}-E{S}_{n}\le -\epsilon \right)\le exp\left\{-\frac{2{\epsilon }^{2}}{{\sum }_{i=1}^{n}{\left({b}_{i}-{a}_{i}\right)}^{2}}\right\},$
(2.10)
and
$P\left(\mid {S}_{n}-E{S}_{n}\mid \ge \epsilon \right)\le 2exp\left\{-\frac{2{\epsilon }^{2}}{{\sum }_{i=1}^{n}{\left({b}_{i}-{a}_{i}\right)}^{2}}\right\}.$
(2.11)
Proof. For any h > 0, by Markov's inequality, we can see that
$P\left({S}_{n}-E{S}_{n}\ge \epsilon \right)\le E{e}^{h\left({S}_{n}-E{S}_{n}-\epsilon \right)}.$
(2.12)
It follows from Remark 1.1 that
$E{e}^{h\left({S}_{n}-E{S}_{n}-\epsilon \right)}={e}^{-h\epsilon }E\left(\prod _{i=1}^{n}{e}^{h\left({X}_{i}-E{X}_{i}\right)}\right)\le {e}^{-h\epsilon }\prod _{i=1}^{n}E{e}^{h\left({X}_{i}-E{X}_{i}\right)}.$
(2.13)
Denote EX i = μ i for each i ≥ 1. By a i ≤ X i ≤ b i and Lemma 1.1, we have
$E{e}^{h\left({X}_{i}-E{X}_{i}\right)}\le {e}^{-h{\mu }_{i}}\left(\frac{{b}_{i}-{\mu }_{i}}{{b}_{i}-{a}_{i}}{e}^{h{a}_{i}}+\frac{{\mu }_{i}-{a}_{i}}{{b}_{i}-{a}_{i}}{e}^{h{b}_{i}}\right)\doteq {e}^{L\left({h}_{i}\right)},$
(2.14)
where
$L\left({h}_{i}\right)=-{h}_{i}{p}_{i}+ln\left(1-{p}_{i}+{p}_{i}{e}^{{h}_{i}}\right),\phantom{\rule{1em}{0ex}}{h}_{i}=h\left({b}_{i}-{a}_{i}\right),\phantom{\rule{1em}{0ex}}{p}_{i}=\frac{{\mu }_{i}-{a}_{i}}{{b}_{i}-{a}_{i}}.$
The first two derivatives of L(h i ) with respect to h i are
${L}^{\prime }\left({h}_{i}\right)=-{p}_{i}+\frac{{p}_{i}}{\left(1-{p}_{i}\right){e}^{-{h}_{i}}+{p}_{i}},\phantom{\rule{1em}{0ex}}{L}^{″}\left({h}_{i}\right)=\frac{{p}_{i}\left(1-{p}_{i}\right){e}^{-{h}_{i}}}{{\left[\left(1-{p}_{i}\right){e}^{-{h}_{i}}+{p}_{i}\right]}^{2}}.$
(2.15)
The last ratio in (2.15) is of the form u(1 - u), where $u=\frac{\left(1-{p}_{i}\right){e}^{-{h}_{i}}}{\left(1-{p}_{i}\right){e}^{-{h}_{i}}+{p}_{i}}\in \left(0,1\right)$. Hence,
${L}^{″}\left({h}_{i}\right)=\frac{\left(1-{p}_{i}\right){e}^{-{h}_{i}}}{\left(1-{p}_{i}\right){e}^{-{h}_{i}}+{p}_{i}}\left(1-\frac{\left(1-{p}_{i}\right){e}^{-{h}_{i}}}{\left(1-{p}_{i}\right){e}^{-{h}_{i}}+{p}_{i}}\right)\le \frac{1}{4}.$
(2.16)
Therefore, by Taylor's expansion and (2.16), we can get
$L\left({h}_{i}\right)\le L\left(0\right)+{L}^{\prime }\left(0\right){h}_{i}+\frac{1}{8}{h}_{i}^{2}=\frac{1}{8}{h}_{i}^{2}=\frac{1}{8}{h}^{2}{\left({b}_{i}-{a}_{i}\right)}^{2}.$
(2.17)
By (2.12), (2.13), and (2.17), we have
$P\left({S}_{n}-E{S}_{n}\ge \epsilon \right)\le exp\left\{-h\epsilon +\frac{1}{8}{h}^{2}\sum _{i=1}^{n}{\left({b}_{i}-{a}_{i}\right)}^{2}\right\}.$
(2.18)

It is easily seen that the right-hand side of (2.18) has its minimum at $h=\frac{4\epsilon }{{\sum }_{i=1}^{n}{\left({b}_{i}-{a}_{i}\right)}^{2}}$. Inserting this value in (2.18), we can obtain (2.9) immediately. Since {-X n , n ≥ 1} is a sequence of acceptable random variables, (2.9) implies (2.10). Therefore, (2.11) follows from (2.9) and (2.10) immediately. This completes the proof of the theorem.
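The key analytic step (2.17), namely L(h) ≤ h²/8 for every real h and every p ∈ (0, 1) (Hoeffding's lemma), can be spot-checked on a grid; this sketch is ours:

```python
import math

def L(h, p):
    # L(h) = -h*p + ln(1 - p + p*e^h), as in the proof of Theorem 2.3.
    return -h * p + math.log(1 - p + p * math.exp(h))

# Check L(h) <= h^2/8 over a grid of p in (0, 1) and h in [-5, 5].
for p in [0.05, 0.25, 0.5, 0.75, 0.95]:
    for h in [x / 10 for x in range(-50, 51)]:
        assert L(h, p) <= h**2 / 8 + 1e-12, (p, h)
```

Note that L(0) = L'(0) = 0, so the quadratic bound comes entirely from the second-derivative estimate (2.16).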

## 3 Complete convergence for acceptable random variables

In this section, we will present some complete convergence results for a sequence of acceptable random variables. The concept of complete convergence was introduced by Hsu and Robbins  as follows. A sequence of random variables {U n , n ≥ 1} is said to converge completely to a constant C if ${\sum }_{n=1}^{\infty }P\left(\mid {U}_{n}-C\mid \phantom{\rule{2.77695pt}{0ex}}>\epsilon \right)<\infty$ for all ε > 0. In view of the Borel-Cantelli lemma, this implies that U n → C almost surely (a.s.). The converse is true if the {U n , n ≥ 1} are independent. Hsu and Robbins  proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdős  proved the converse. The result of Hsu-Robbins-Erdős is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors.

Define the space of sequences
$\mathcal{H}=\left\{\left\{{b}_{n}\right\}:\sum _{n=1}^{\infty }{h}^{{b}_{n}}<\infty \phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{every}}\phantom{\rule{2.77695pt}{0ex}}0<h<1\right\}.$

The following results are based on the space of sequences $\mathcal{H}$.
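For orientation, here is a worked example (ours, not the article's) of checking membership in $\mathcal{H}$ for two standard choices of {b n }:

```latex
% b_n = n^{\alpha} with \alpha > 0 belongs to \mathcal{H}: for every 0 < h < 1,
\sum_{n=1}^{\infty} h^{\,n^{\alpha}}
  = \sum_{n=1}^{\infty} e^{-n^{\alpha}\log(1/h)} < \infty,
% since the terms decay faster than any power of n.
%
% b_n = \log n does NOT belong to \mathcal{H}: for e^{-1} < h < 1,
\sum_{n=2}^{\infty} h^{\log n}
  = \sum_{n=2}^{\infty} n^{\log h} = \infty,
% because \log h > -1 makes this a divergent p-series.
```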

Theorem 3.1. Let {X n , n ≥ 1} be a sequence of acceptable random variables with EX i = 0 and |X i | ≤ b for each i ≥ 1, where b is a positive constant. Assume that ${\sum }_{i=1}^{n}E{X}_{i}^{2}=O\left({b}_{n}\right)$ for some $\left\{{b}_{n}\right\}\in \mathcal{H}$. Then,
${b}_{n}^{-1}{S}_{n}\to 0\phantom{\rule{2.77695pt}{0ex}}completely\phantom{\rule{1em}{0ex}}as\phantom{\rule{2.77695pt}{0ex}}n\to \infty .$
(3.1)
Proof. For any ε > 0, it follows from Theorem 2.2 that
$\begin{array}{cc}\hfill \sum _{n=1}^{\infty }P\left(\mid {S}_{n}\mid \ge {b}_{n}\epsilon \right)& \le 2\sum _{n=1}^{\infty }exp\left\{-\frac{{b}_{n}^{2}{\epsilon }^{2}}{2{\sum }_{i=1}^{n}E{X}_{i}^{2}+\frac{2}{3}b{b}_{n}\epsilon }\right\}\hfill \\ \le 2\sum _{n=1}^{\infty }exp\left\{-C{b}_{n}\right\}<\infty ,\hfill \end{array}$

which implies (3.1). Here, C is a positive number not depending on n.   □

Theorem 3.2. Let {X n , n ≥ 1} be a sequence of acceptable random variables with |X i | ≤ c < ∞ for each i ≥ 1, where c is a positive constant. Then, for every $\left\{{b}_{n}\right\}\in \mathcal{H}$,
${\left({b}_{n}n\right)}^{-1∕2}\left({S}_{n}-E{S}_{n}\right)\to 0\phantom{\rule{0.3em}{0ex}}completely\phantom{\rule{1em}{0ex}}as\phantom{\rule{2.77695pt}{0ex}}n\to \infty .$
(3.2)
Proof. For any ε > 0, it follows from Theorem 2.3 that
$\sum _{n=1}^{\infty }P\left(\mid {S}_{n}-E{S}_{n}\mid \ge {\left({b}_{n}n\right)}^{1∕2}\epsilon \right)\le 2\sum _{n=1}^{\infty }{\left[exp\left(-\frac{{\epsilon }^{2}}{2{c}^{2}}\right)\right]}^{{b}_{n}}<\infty ,$

which implies (3.2).   □

Theorem 3.3. Let {X n , n ≥ 1} be a sequence of acceptable random variables with EX i = 0 and $E{X}_{i}^{2}={\sigma }_{i}^{2}<\infty$ for each i ≥ 1. Denote ${B}_{n}^{2}={\sum }_{i=1}^{n}{\sigma }_{i}^{2}$ for each n ≥ 1. Assume that there exists a positive number H such that
$\mid E{X}_{i}^{m}\mid \phantom{\rule{2.77695pt}{0ex}}\le \frac{m!}{2}{\sigma }_{i}^{2}{H}^{m-2},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,n$
(3.3)
for any positive integer m ≥ 2. Then,
${b}_{n}^{-1}{S}_{n}\to 0\phantom{\rule{2.77695pt}{0ex}}completely\phantom{\rule{1em}{0ex}}as\phantom{\rule{2.77695pt}{0ex}}n\to \infty ,$
(3.4)

provided that $\left\{{b}_{n}^{2}∕{B}_{n}^{2}\right\}\in \mathcal{H}$ and $\left\{{b}_{n}\right\}\in \mathcal{H}$.

Proof. By (3.3), we can see that
$E{e}^{t{X}_{i}}=1+\frac{{t}^{2}}{2}{\sigma }_{i}^{2}+\frac{{t}^{3}}{6}E{X}_{i}^{3}+\cdots \le 1+\frac{{t}^{2}}{2}{\sigma }_{i}^{2}\left(1+H\mid t\mid +{H}^{2}{t}^{2}+\cdots \right)$
for i = 1, 2,..., n, n ≥ 1. When $\mid t\mid \le \frac{1}{2H}$, it follows that
$E{e}^{t{X}_{i}}\le 1+\frac{{t}^{2}{\sigma }_{i}^{2}}{2}\phantom{\rule{2.77695pt}{0ex}}\cdot \phantom{\rule{2.77695pt}{0ex}}\frac{1}{1-H\mid t\mid }\le 1+{t}^{2}{\sigma }_{i}^{2}\le {e}^{{t}^{2}{\sigma }_{i}^{2}},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,n.$
(3.5)
Therefore, by Markov's inequality, Definition 1.3 and (3.5), we can get that for any x ≥ 0 and $0\le t\le \frac{1}{2H}$,
$\begin{array}{cc}\hfill P\left(\mid \sum _{i=1}^{n}{X}_{i}\mid \ge x\right)& \le P\left(\sum _{i=1}^{n}{X}_{i}\ge x\right)+P\left(\sum _{i=1}^{n}\left(-{X}_{i}\right)\ge x\right)\hfill \\ \le {e}^{-tx}Eexp\left\{t\sum _{i=1}^{n}{X}_{i}\right\}+{e}^{-tx}Eexp\left\{t\sum _{i=1}^{n}\left(-{X}_{i}\right)\right\}\hfill \\ \le {e}^{-tx}\left(\prod _{i=1}^{n}E{e}^{t{X}_{i}}+\prod _{i=1}^{n}E{e}^{-t{X}_{i}}\right)\hfill \\ \le 2exp\left\{-tx+{t}^{2}{B}_{n}^{2}\right\}.\hfill \end{array}$
Hence,
$P\left(\mid \sum _{i=1}^{n}{X}_{i}\mid \ge x\right)\le 2\underset{0\le t\le \frac{1}{2H}}{min}\phantom{\rule{2.77695pt}{0ex}}exp\left\{-tx+{t}^{2}{B}_{n}^{2}\right\}.$
If $0\le x\le \frac{{B}_{n}^{2}}{H}$, then
$\underset{0\le t\le \frac{1}{2H}}{min}\phantom{\rule{2.77695pt}{0ex}}exp\left\{-tx+{t}^{2}{B}_{n}^{2}\right\}=exp\left\{-\frac{x}{2{B}_{n}^{2}}x+\frac{{x}^{2}}{4{B}_{n}^{4}}{B}_{n}^{2}\right\}=exp\left\{-\frac{{x}^{2}}{4{B}_{n}^{2}}\right\};$
if $x\ge \frac{{B}_{n}^{2}}{H}$, then
$\underset{0\le t\le \frac{1}{2H}}{min}\phantom{\rule{2.77695pt}{0ex}}exp\left\{-tx+{t}^{2}{B}_{n}^{2}\right\}=exp\left\{-\frac{1}{2H}x+\frac{1}{4{H}^{2}}{B}_{n}^{2}\right\}\le exp\left\{-\frac{x}{4H}\right\}.$
From the statements above, we can get that
$P\left(\mid \sum _{i=1}^{n}{X}_{i}\mid \ge x\right)\le \left\{\begin{array}{cc}2{e}^{-\frac{{x}^{2}}{4{B}_{n}^{2}}},& 0\le x\le \frac{{B}_{n}^{2}}{H},\\ 2{e}^{-\frac{x}{4H}},& x\ge \frac{{B}_{n}^{2}}{H},\end{array}\right.$
which implies that for any x ≥ 0,
$P\left(\mid \sum _{i=1}^{n}{X}_{i}\mid \ge x\right)\le 2exp\left\{-\frac{{x}^{2}}{4{B}_{n}^{2}}\right\}+2exp\left\{-\frac{x}{4H}\right\}.$
Therefore, the assumptions on {b n } yield that
$\sum _{n=1}^{\infty }P\left(\mid \frac{1}{{b}_{n}}\sum _{i=1}^{n}{X}_{i}\mid \ge \epsilon \right)\le 2\sum _{n=1}^{\infty }exp\left\{-\frac{{b}_{n}^{2}{\epsilon }^{2}}{4{B}_{n}^{2}}\right\}+2\sum _{n=1}^{\infty }exp\left\{-\frac{{b}_{n}\epsilon }{4H}\right\}<\infty .$

This completes the proof of the theorem.   □
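The two-regime minimization used in the proof can be cross-checked against a brute-force grid search; the parameter values below are arbitrary (our sketch):

```python
import math

def piecewise_min(x, Bn2, H):
    # Closed-form minimum of exp{-t*x + t^2*Bn2} over 0 <= t <= 1/(2H),
    # as derived in the proof of Theorem 3.3 (before relaxing the second
    # case to exp{-x/(4H)}).
    if x <= Bn2 / H:
        return math.exp(-x**2 / (4 * Bn2))          # interior minimum t = x/(2*Bn2)
    return math.exp(-x / (2 * H) + Bn2 / (4 * H**2))  # boundary t = 1/(2H)

def grid_min(x, Bn2, H, steps=20000):
    # Brute-force minimum over an evenly spaced grid of t-values.
    tmax = 1 / (2 * H)
    best = float("inf")
    for k in range(steps + 1):
        t = tmax * k / steps
        best = min(best, math.exp(-t * x + t**2 * Bn2))
    return best

Bn2, H = 4.0, 0.5
for x in [0.5, 2.0, Bn2 / H, 10.0, 30.0]:
    assert abs(piecewise_min(x, Bn2, H) - grid_min(x, Bn2, H)) < 1e-6
```

At x = B_n²/H the interior and boundary formulas coincide, so the piecewise bound is continuous.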

## Declarations

### Acknowledgements

The authors are most grateful to the editor and anonymous referee for the careful reading of the manuscript and valuable suggestions which helped in significantly improving an earlier version of this article.

The study was supported by the National Natural Science Foundation of China (11171001, 71071002, 11126176) and the Academic Innovation Team of Anhui University (KJTD001B).

## Authors’ Affiliations

(1)
School of Mathematical Science, Anhui University, Hefei, 230039, China
(2)
Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2, Canada

## References 