
Conditional acceptability of random variables

Abstract

Acceptable random variables, introduced by Giuliano Antonini et al. (J. Math. Anal. Appl. 338:1188-1203, 2008), form a class of dependent random variables that contains negatively dependent random variables as a particular case. The concept of acceptability has been studied by several authors under various versions of the definition, such as extended acceptability or wide acceptability. In this paper, we combine the concept of acceptability with the concept of conditioning, which has been the subject of current research activity. For conditionally acceptable random variables, we provide a number of probability inequalities that can be used to obtain asymptotic results.

1 Introduction

Let \(\{X_{n},n\in\mathbb{N}\}\) be a sequence of random variables defined on a probability space \((\Omega, \mathcal{A},\mathcal{P})\). Giuliano Antonini et al. [1] introduced the concept of acceptable random variables as follows.

Definition 1

A finite collection of random variables \(X_{1},X_{2},\ldots,X_{n}\) is said to be acceptable if for any real λ,

$$E\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr)\leq\prod_{i=1}^{n}E\exp (\lambda X_{i}). $$

An infinite sequence of random variables \(\{X_{n},n\in\mathbb{N}\}\) is acceptable if every finite subcollection is acceptable.

The class of acceptable random variables includes in a trivial way collections of independent random variables. But in most cases, acceptable random variables exhibit a form of negative dependence. In fact, as Giuliano Antonini et al. [1] point out, negatively associated random variables with a finite Laplace transform satisfy the notion of acceptability. However, acceptable random variables do not have to be negatively dependent. A classical example of acceptable random variables that are not negatively dependent can be constructed based on Problem III.1 of the classical book of Feller [2]. Details can be found in Giuliano Antonini et al. [1], Shen et al. [3], or Sung et al. [4].
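As a quick numerical illustration of Definition 1 (our own sketch, not from [1]), consider the antithetic pair \((X,-X)\) with \(X\sim N(0,1)\). The pair is acceptable since, by the Cauchy-Schwarz inequality, \(Ee^{\lambda X}Ee^{-\lambda X}\geq1 = Ee^{\lambda(X-X)}\). A Monte Carlo check, assuming numpy is available:

```python
# Monte Carlo sketch of Definition 1 for the antithetic pair (X, -X),
# X ~ N(0, 1); the pair is acceptable by the Cauchy-Schwarz inequality.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

for lam in (-2.0, -0.5, 0.5, 2.0):
    lhs = np.exp(lam * (x - x)).mean()                  # E exp(lam (X + Y)) = 1
    rhs = np.exp(lam * x).mean() * np.exp(-lam * x).mean()
    print(f"lambda = {lam:+.1f}: LHS = {lhs:.3f} <= RHS = {rhs:.3f}")
```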

The idea of acceptability has been modified or generalized in certain directions. For example, Giuliano Antonini et al. [1] introduced the concept of m-acceptable random variables, whereas other authors provided weaker versions such as the notions of extended acceptability or wide acceptability. The following definition given by Sung et al. [4] provides a weaker version of acceptability by imposing a condition on λ.

Definition 2

A finite collection of random variables \(X_{1},X_{2},\ldots ,X_{n}\) is said to be acceptable if there exists \(\delta>0\) such that, for any real λ with \(|\lambda|<\delta\),

$$E\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr)\leq\prod_{i=1}^{n}E\exp (\lambda X_{i}). $$

An infinite sequence of random variables \(\{X_{n},n\in\mathbb{N}\}\) is acceptable if every finite sub-collection is acceptable.

The concept of acceptable random variables has been studied extensively by a few authors, and a number of results are available in the literature such as exponential inequalities and complete convergence results. For the interested reader, we suggest the papers of Giuliano Antonini et al. [1], Sung et al. [4], Wang et al. [5], Shen et al. [3], among others.

Further, in addition to the definition of acceptability, Choi and Baek [6] introduced the concept of extended acceptability as follows.

Definition 3

A finite collection of random variables \(X_{1},X_{2},\ldots,X_{n}\) is said to be extended acceptable if there exists a constant \(M>0\) such that, for any real λ,

$$E\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr)\leq M \prod_{i=1}^{n}E\exp (\lambda X_{i}). $$

An infinite sequence of random variables \(\{X_{n},n\in\mathbb{N}\}\) is extended acceptable if every finite subcollection is extended acceptable.

It is clear that acceptable random variables are extended acceptable. The following example provides a sequence of random variables that satisfies the notion of extended acceptability.

Example 4

Let \(\{X_{n}, n\in\mathbb{N}\}\) be absolutely continuous random variables with common distribution function \(F(x)\) and density \(f(x)\) such that the finite-dimensional distributions are given by the copula

$$C(u_{1},\ldots,u_{k}) = \Biggl(\prod _{i=1}^{k}u_{i} \Biggr)\exp \Biggl(\beta \prod_{i=1}^{k}(1-u_{i}) \Biggr), \quad k\geq2. $$

This copula can be equivalently represented as

$$C(u_{1},\ldots,u_{k})= \prod_{i=1}^{k}u_{i}+ \sum_{n=1}^{\infty}\frac{\beta^{n} \prod_{i=1}^{k}u_{i}(1-u_{i})^{n}}{n !}. $$

The density of C can be obtained by the formula

$$c(u_{1},\ldots,u_{k}) = \frac{\partial^{k} C}{\partial u_{1}\, \partial u_{2} \cdots \, \partial u_{k}}, $$

and it can be proven that, for \(|\beta|\leq\log2\),

$$0\leq2-\exp{\bigl( \vert \beta \vert \bigr)}= 1- \sum _{n=1}^{\infty}\frac{\vert \beta \vert ^{n}}{n !}\leq c(u_{1}, \ldots,u_{k})\leq\exp{\bigl(\vert \beta \vert \bigr)}. $$

Furthermore, \(\int_{0}^{1}\cdots\int_{0}^{1}c(u_{1},\ldots,u_{k}) \, du_{1}\cdots \, du_{k}=1\), which shows that \(c(u_{1},\ldots,u_{k})\) is a density function. It is known that

$$f_{\mathbf{X}}(x_{1},\ldots,x_{n}) = c \bigl(F(x_{1}),\ldots,F(x_{n})\bigr)f(x_{1})\cdots f(x_{n}), $$

which leads to

$$f_{\mathbf{X}}(x_{1},\ldots,x_{n}) \leq\exp{\bigl(\vert \beta \vert \bigr)} f(x_{1})\cdots f(x_{n}). $$

Hence,

$$\begin{aligned} E \Biggl[\exp{ \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr)} \Biggr] =& \int \cdots \int\exp{ \Biggl(\lambda\sum_{i=1}^{n}x_{i} \Biggr)}f_{\mathbf {X}}(x_{1},\ldots,x_{n})\, dx_{1}\cdots\, dx_{n} \\ \leq& \exp{\bigl(\vert \beta \vert \bigr)} \int\cdots \int\exp{ \Biggl(\lambda\sum_{i=1}^{n}x_{i} \Biggr)} f(x_{1})\cdots f(x_{n})\, dx_{1}\cdots \, dx_{n} \\ =&\exp{\bigl(\vert \beta \vert \bigr)}\prod_{i=1}^{n} \int\exp{ (\lambda x_{i} )} f(x_{i})\, dx_{i} \\ =&\exp{\bigl(\vert \beta \vert \bigr)}\prod_{i=1}^{n} E\bigl[\exp{ (\lambda X_{i} )}\bigr], \end{aligned}$$

proving that the random variables satisfy the definition of extended acceptability for \(M = \exp{(|\beta|)}>0\).

Observe that \(\{X_{n},n\in\mathbb{N}\}\) is a strictly stationary sequence that is negatively dependent for \(\beta<0\) and positively dependent for \(\beta>0\).
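The density bound used above can also be checked numerically. The following sketch (our own, assuming sympy and numpy are available) differentiates the copula symbolically for \(k=2\) and \(\beta=\log2\) and evaluates it on a grid; the values stay inside \([2-e^{|\beta|}, e^{|\beta|}] = [0,2]\):

```python
# Numerical check (k = 2, beta = log 2) of the density bounds in Example 4.
import numpy as np
import sympy as sp

u, v = sp.symbols('u v')
beta = sp.log(2)
C = u * v * sp.exp(beta * (1 - u) * (1 - v))         # the copula for k = 2
c = sp.lambdify((u, v), sp.diff(C, u, v), 'numpy')   # its density d^2C/(du dv)

g = np.linspace(1e-6, 1 - 1e-6, 400)
U, V = np.meshgrid(g, g)
vals = c(U, V)
print(vals.min(), vals.max())   # observed to lie within [0, 2]
```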

For the class of extended acceptable random variables, Choi and Baek [6] provide an exponential inequality that enables the derivation of asymptotic results based on complete convergence.

A different version of acceptability, the notion of wide acceptability, is provided by Wang et al. [5].

Definition 5

Random variables \(\{X_{n},n\in\mathbb{N}\}\) are widely acceptable for \(\delta_{0}>0\) if there exist positive numbers \(g(n)\), \(n\geq1\), such that, for any real \(0<\lambda<\delta_{0}\),

$$E\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr)\leq g(n) \prod_{i=1}^{n}E\exp ( \lambda X_{i})\quad \mbox{for all }n\geq1. $$

The following example gives random variables that satisfy the definition of wide acceptability.

Example 6

Consider a sequence \(\{X_{n}, n\in\mathbb{N}\}\) of random variables with the same absolutely continuous distribution function \(F(x)\). Assume that the joint distribution of \((X_{i_{1}},\ldots,X_{i_{n}})\) is given by

$$F_{i_{1},\ldots,i_{n}}(x_{1},\ldots,x_{n}) = \prod _{k=1}^{n}F(x_{k}) \biggl(1+\sum _{1\leq j< k\leq n}a_{i_{j}i_{k}}\bigl(1-F(x_{j})\bigr) \bigl(1-F(x_{k})\bigr) \biggr), $$

provided that \(|\sum_{1\leq i< j\leq n} a_{ij} |\leq1\). Then it can be proven that

$$f_{\mathbf{X}}(x_{1},x_{2},\ldots,x_{n})\leq \biggl(1+\biggl\vert \sum_{1\leq i< j\leq n} a_{ij} \biggr\vert \biggr)f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}), $$

which leads to

$$E \Biggl[\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr) \Biggr]\leq \biggl(1+\biggl\vert \sum_{1\leq i< j\leq n} a_{ij}\biggr\vert \biggr)\prod_{i=1}^{n}E \bigl[e^{\lambda X_{i}}\bigr], $$

proving that \(\{X_{n}, n\in\mathbb{N}\}\) is a sequence of widely acceptable random variables with \(g(n) = 1+ |\sum_{1\leq i< j\leq n} a_{ij} | \).
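As a numerical illustration (our own sketch, not part of the original example), take \(k=2\) and uniform marginals on \([0,1]\), for which the joint density is \(1+a(1-2x)(1-2y)\); a midpoint-rule computation confirms the bound with \(g(2)=1+|a|\):

```python
# Numerical sketch of Example 6 with k = 2 and Uniform(0, 1) marginals,
# joint density f(x, y) = 1 + a(1 - 2x)(1 - 2y).
import numpy as np

a, lam, N = 0.8, 1.5, 200_000
x = (np.arange(N) + 0.5) / N             # midpoint rule on [0, 1]

m1 = np.exp(lam * x).mean()              # E exp(lam X), X ~ Uniform(0, 1)
cross = (np.exp(lam * x) * (1 - 2 * x)).mean()
lhs = m1**2 + a * cross**2               # E exp(lam (X1 + X2))
rhs = (1 + abs(a)) * m1**2               # g(2) * product of marginal MGFs
print(lhs <= rhs)                        # True: the bound of Example 6 holds
```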

The concept of widely acceptable random variables follows naturally from the concept of wide dependence of random variables introduced by Wang et al. [7]. Wang et al. [8] and Wang et al. [7] stated (without proof) that, for widely orthant dependent random variables, the inequality in Definition 5 is true for any λ. For widely acceptable random variables, Wang et al. [5] pointed out, although did not provide the details, that one can get exponential inequalities similar to those obtained for acceptable random variables.

In this paper, we combine the concept of conditioning on a σ-algebra with the concept of acceptability (in fact, wide acceptability) and define conditionally acceptable random variables. In Section 2.1, we give the basic definitions and examples and prove some classical exponential inequalities. In Section 2.2, we provide asymptotic results by utilizing the tools of Section 2.1. Finally, in Section 3, we give some concluding remarks.

2 Results and discussion

Recently, various researchers have extensively studied the concepts of conditional independence and conditional association (see, e.g., Chow and Teicher [9], Majerak et al. [10], Roussas [11], and Prakasa Rao [12]), providing conditional versions of known results such as the generalized Borel-Cantelli lemma, the generalized Kolmogorov inequality, and the generalized Hájek-Rényi inequalities. Counterexamples available in the literature prove that conditional independence and conditional association are not equivalent to the corresponding unconditional concepts.

Following the notation introduced by Prakasa Rao [12], let \(E^{\mathcal{F}}(X) = E(X|\mathcal{F})\) and \(P^{\mathcal{F}}(A) = P(A|\mathcal{F})\) denote the conditional expectation and the conditional probability, respectively, where \(\mathcal{F}\) is a sub-σ-algebra of \(\mathcal{A}\). Furthermore, \(\operatorname{Cov}^{\mathcal {F}}(X,Y)\) denotes the conditional covariance of X and Y given \(\mathcal{F}\), where

$$\operatorname{Cov}^{\mathcal{F}}(X,Y) = E^{\mathcal{F}}(XY)-E^{\mathcal {F}}(X)E^{\mathcal{F}}(Y) , $$

whereas the conditional variance is defined as \(\operatorname{Var}^{\mathcal{F}} (X) = \operatorname{Cov}^{\mathcal{F}}(X,X)\).

The concept of conditional negative association was introduced by Roussas [11]. Let us recall its definition since it is related to the results presented further.

Definition 7

A finite collection of random variables \(X_{1}, \ldots, X_{n}\) is said to be conditionally negatively associated given \(\mathcal{F}\) (\(\mathcal{F}\)-NA) if

$$\operatorname{Cov}^{\mathcal{F}}\bigl(f(X_{i},i\in A), g(X_{j},j\in B)\bigr)\leq0 \quad \mbox{a.s.} $$

for any disjoint subsets A and B of \(\{1, 2, \ldots, n\}\) and for any real-valued componentwise nondecreasing functions f, g on \(\mathbb{R}^{|A|}\) and \(\mathbb{R}^{|B|}\), respectively, where \(|A| = \operatorname{card}(A)\), provided that the covariance is defined. An infinite collection is conditionally negatively associated given \(\mathcal{F}\) if every finite subcollection is \(\mathcal{F}\)-NA.

As mentioned earlier, it can be shown that the concepts of negative association and conditional negative association are not equivalent. See, for example, Yuan et al. [13], where various counterexamples are given.

2.1 Conditional acceptability

In this paper, we define the concept of conditional acceptability by combining the concept of conditioning on a σ-algebra and the concept of acceptability. In particular, conditioning is combined with the concept of wide acceptability. We therefore give the following definition.

Definition 8

A finite family of random variables \(X_{1}, X_{2},\ldots,X_{n}\) is said to be \(\mathcal{F}\)-acceptable for \(\delta>0\) if \(E ( \exp(\delta|X_{i}|) ) <\infty\) for all i and, for any \(|\lambda|< \delta\), there exist positive numbers \(g(n)\), \(n\geq1\), such that

$$E^{\mathcal{F}} \Biggl(\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr) \Biggr)\leq g(n)\prod_{i=1}^{n}E^{\mathcal{F}} \bigl(\exp (\lambda X_{i} ) \bigr) \quad \mbox{a.s.} $$

A sequence of random variables \(\{X_{n}, n\in\mathbb{N}\}\) is \(\mathcal {F}\)-acceptable for \(\delta>0\) if every finite subfamily is \(\mathcal {F}\)-acceptable for \(\delta>0\).

Remark 9

The definition of \(\mathcal{F}\)-acceptability allows the quantity λ to be a random object. Thus, if the random variables \(X_{1}, X_{2},\ldots,X_{n}\) are \(\mathcal{F}\)-acceptable for \(\delta >0\), then

$$E^{\mathcal{F}} \Biggl(\exp \Biggl(\lambda\sum_{i=1}^{n}X_{i} \Biggr) \Biggr)\leq g(n)\prod_{i=1}^{n}E^{\mathcal{F}} \bigl(\exp (\lambda X_{i} ) \bigr)\quad \mbox{a.s.} $$

for an \(\mathcal{F}\)-measurable random variable λ such that \(|\lambda|<\delta\) a.s.

Remark 10

It can be easily verified that if the random variables \(X_{1},\ldots,X_{n}\) are \(\mathcal{F}\)-acceptable, then the centered random variables \(X_{1}-E^{\mathcal {F}}(X_{1}), X_{2}-E^{\mathcal{F}}(X_{2}),\ldots,X_{n}-E^{\mathcal{F}}(X_{n})\) are also \(\mathcal{F}\)-acceptable, as are \(- X_{1} , - X_{2} , \ldots, - X_{n}\).

The random variables given in the following example satisfy the definition of \(\mathcal{F}\)-acceptability.

Example 11

Let \(\Omega= \{1,2,3,4\}\) with \(P(\{i\})=\frac{1}{4}\). Define the events

$$A_{1} = \{1,2\}\quad \mbox{and}\quad A_{2} = \{2,3\} $$

and the random variables

$$X_{1} = I_{A_{1}}\quad \mbox{and}\quad X_{2} = I_{A_{2}}. $$

Let \(B=\{1\}\) and \(\mathcal{F} = \{\Omega, B, B^{c}, \emptyset\}\). Then

$$\begin{aligned}& E^{\mathcal{F}}\bigl[e^{\lambda X_{1}}\bigr] = \textstyle\begin{cases} e^{\lambda} ,& \omega\in B, \\ \frac{1}{3}e^{\lambda}+\frac{2}{3},& \omega\in B^{c} , \end{cases}\displaystyle \\& E^{\mathcal{F}}\bigl[e^{\lambda X_{2}}\bigr] = \textstyle\begin{cases} 1, & \omega\in B, \\ \frac{1}{3}+\frac{2}{3}e^{\lambda} ,& \omega\in B^{c}, \end{cases}\displaystyle \end{aligned}$$

and

$$E^{\mathcal{F}}\bigl[e^{\lambda(X_{1}+X_{2})}\bigr] = \textstyle\begin{cases} e^{\lambda}, & \omega\in B, \\ \frac{1}{3}(e^{2\lambda}+e^{\lambda}+1), & \omega\in B^{c}. \end{cases} $$

For the random variables to satisfy the definition of \(\mathcal {F}\)-acceptability, the following inequality needs to be valid:

$$E^{\mathcal{F}}\bigl[e^{\lambda(X_{1}+X_{2})}\bigr]\leq g(2)E^{\mathcal {F}} \bigl[e^{\lambda X_{1}}\bigr]E^{\mathcal{F}}\bigl[e^{\lambda X_{2}}\bigr] $$

for all \(\lambda\in(-\delta,\delta)\) and some \(g(2)>0\). For \(\omega\in B\), this inequality is satisfied for all \(\lambda\in\mathbb {R}\) if \(g(2)\) is chosen to be any number with \(g(2)\geq1\). For \(\omega\in B^{c}\), the inequality is equivalent to

$$\bigl(3-2g(2)\bigr)e^{2\lambda}+\bigl(3-5g(2)\bigr)e^{\lambda}+ \bigl(3-2g(2)\bigr)\leq0. $$

Observe that the last inequality is satisfied for all \(\lambda\in \mathbb{R}\) if \(g(2)\geq\frac{3}{2}\). Thus, the random variables are \(\mathcal{F}\)-acceptable for any real \(\lambda\in(-\delta,\delta)\) where \(\delta>0\) and \(g(2)\geq\frac{3}{2}\). Furthermore, it is worth mentioning that the random variables \(\{X_{1},X_{2}\}\) are not \(\mathcal{F}\)-NA.
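The threshold \(\frac{3}{2}\) is sharp. A quick scan (our own check, assuming numpy) over a grid of λ values confirms that the ratio of the left- to the right-hand side on \(B^{c}\) approaches \(\frac{3}{2}\) as \(|\lambda|\to\infty\):

```python
# Scan of Example 11 on B^c: the smallest admissible g(2) is the supremum of
# 3(x^2 + x + 1) / (2x^2 + 5x + 2) over x = e^lambda > 0, which equals 3/2.
import numpy as np

lam = np.linspace(-20, 20, 100_001)
e = np.exp(lam)
lhs = (e**2 + e + 1) / 3                 # E^F[e^{lam (X1 + X2)}] on B^c
rhs = ((e + 2) / 3) * ((1 + 2 * e) / 3)  # E^F[e^{lam X1}] E^F[e^{lam X2}] on B^c
print((lhs / rhs).max())                 # approaches 3/2 from below
```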

In the case where \(\mathcal{F}\) is chosen to be the trivial σ-algebra, that is, \(\mathcal{F} = \{\emptyset,\Omega\}\), the definition of \(\mathcal{F}\)-acceptability reduces to the definition of unconditional wide acceptability. The converse is not true in general, as the following counterexample shows: the concepts of \(\mathcal{F}\)-acceptability and acceptability are not equivalent.

Example 12

Let \(\Omega= \{1,2,3,4,5,6\}\), and let \(P(\{i\}) = \frac {1}{6}\). Define the events \(A_{1}\) and \(A_{2}\) by

$$A_{1} = \{1,2,3,4\}\quad \mbox{and}\quad A_{2} = \{2,3,4,5 \} $$

and the random variables \(X_{1}\) and \(X_{2}\) by

$$X_{1} = I_{A_{1}}\quad \mbox{and}\quad X_{2} = I_{A_{2}}. $$

Let \(B = \{6\}\), and let \(\mathcal{F} = \{\Omega, B, B^{c},\emptyset\}\) be the sub-σ-algebra generated by B. Yuan et al. [13] proved that \(\{X_{1},X_{2}\}\) are \(\mathcal{F}\)-NA. By proposition P1 of the same paper it follows that, for all \(\lambda \in\mathbb{R}\), \(\{e^{\lambda X_{1}},e^{\lambda X_{2}}\}\) are \(\mathcal{F}\)-NA, and therefore \(\{X_{1},X_{2}\}\) are \(\mathcal {F}\)-acceptable for \(g(2) =1\).

Note that

$$\begin{aligned}& E\bigl[\exp{\lambda(X_{1}+X_{2})}\bigr] = e^{0\lambda} \biggl(\frac{1}{6} \biggr)+e^{1\lambda} \biggl(\frac{2}{6} \biggr)+e^{2\lambda} \biggl(\frac {3}{6} \biggr) = \frac{1}{6} \bigl(1+2e^{\lambda}+3e^{2\lambda} \bigr), \\& E\bigl[e^{\lambda X_{1}}\bigr]E\bigl[e^{\lambda X_{2}}\bigr] = \biggl(e^{0\lambda} \biggl(\frac {2}{6} \biggr)+e^{1\lambda} \biggl( \frac{4}{6} \biggr) \biggr)^{2} = \frac{1}{9} \bigl(1+2e^{\lambda} \bigr)^{2}, \end{aligned}$$

and

$$\begin{aligned} E\bigl[\exp{\lambda(X_{1}+X_{2})}\bigr]-E \bigl[e^{\lambda X_{1}}\bigr]E\bigl[e^{\lambda X_{2}}\bigr] =& \frac {1}{6} \bigl(1+2e^{\lambda}+3e^{2\lambda} \bigr)-\frac{1}{9} \bigl(1+2e^{\lambda} \bigr)^{2} \\ =& \frac{1}{18}-\frac{1}{9}e^{\lambda}+\frac{1}{18}e^{2\lambda}. \end{aligned}$$

For \(\lambda= \log2\), this difference equals \(\frac{1}{18}-\frac{2}{9}+\frac{4}{18}=\frac{1}{18}>0\), proving that \(\{X_{1} , X_{2} \}\) do not satisfy the definition of acceptability.

Let X be a random variable, \(X\geq0\) a.s., and let ϵ be an \(\mathcal{F}\)-measurable random variable such that \(\epsilon>0\) a.s. It is known that

$$ P^{\mathcal{F}}(X\geq\epsilon) \leq\frac{E^{\mathcal{F}}(X)}{\epsilon } \quad \mbox{a.s.} $$
(1)

This inequality is a conditional version of Markov’s inequality and is a basic tool for obtaining the results of this paper.
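For completeness, here is a one-line justification (standard; included as a reading aid): since ϵ is \(\mathcal{F}\)-measurable and \(X\geq\epsilon I(X\geq\epsilon)\) a.s.,

$$E^{\mathcal{F}}(X)\geq E^{\mathcal{F}}\bigl(\epsilon I(X\geq\epsilon)\bigr) = \epsilon E^{\mathcal{F}}\bigl(I(X\geq\epsilon)\bigr) = \epsilon P^{\mathcal{F}}(X\geq\epsilon)\quad \mbox{a.s.} $$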

It is well known that exponential inequalities play an important role in obtaining asymptotic results for sums of independent random variables. Classical exponential inequalities were obtained, for example, by Bernstein, Hoeffding, Kolmogorov, Fuk, and Nagaev (see the monograph of Petrov [14]). A crucial step in proving an exponential inequality is the use of an inequality like the one in Definition 2. Next, we provide several exponential inequalities for \(\mathcal{F}\)-acceptable random variables.

The following Hoeffding-type inequality was obtained by Yuan and Xie [16].

Lemma 13

Assume that \(P(a\leq X\leq b) = 1\), where a and b are \(\mathcal{F}\)-measurable random variables such that \(a< b\) a.s. Then

$$E^{\mathcal{F}} \bigl[\exp \bigl(\lambda\bigl(X-E^{\mathcal{F}}(X)\bigr) \bigr) \bigr]\leq\exp \biggl(\frac{1}{8}\lambda^{2}(b-a)^{2} \biggr)\quad \textit{a.s.} $$

for any \(\mathcal{F}\)-measurable random variable λ.

The result that follows is a conditional version of the well-known Hoeffding inequality (Hoeffding [15], Theorem 2). Similar results were proven by Shen et al. [3], Theorem 2.3, for acceptable random variables and by Yuan and Xie [16], Theorem 1, for conditionally linearly negatively quadrant dependent random variables. Our result improves Theorem 1 of Yuan and Xie [16].

Theorem 14

Let \(X_{1}, X_{2},\ldots,X_{n}\) be \(\mathcal{F}\)-acceptable random variables for \(\delta>0\) such that \(P(a_{i}\leq X_{i}\leq b_{i})=1\), \(i=1,2,\ldots,n\), where \(a_{i}\) and \(b_{i}\) are \(\mathcal {F}\)-measurable random variables such that \(a_{i}< b_{i}\) a.s. for all i. Then for an \(\mathcal{F}\)-measurable ϵ with \(0<\epsilon<\frac {\delta}{4}\sum_{i=1}^{n}(b_{i}-a_{i})^{2}\) a.s., we have

$$P^{\mathcal{F}}\bigl(\bigl|S_{n}-E^{\mathcal{F}}(S_{n})\bigr|\geq \epsilon\bigr)\leq2g(n)\exp \biggl[-\frac{2\epsilon^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}} \biggr]\quad \textit{a.s.}, $$

where \(S_{n} = \sum_{i=1}^{n}X_{i}\).

Proof

Let λ be an \(\mathcal{F}\)-measurable random variable such that \(0<\lambda<\delta\) a.s. Then by the conditional version of Markov’s inequality

$$\begin{aligned} A \equiv&P^{\mathcal{F}}\bigl(S_{n} - E^{\mathcal{F}}(S_{n}) \geq\epsilon\bigr) \\ =&P^{\mathcal{F}}\bigl(\exp\bigl(\lambda\bigl(S_{n} - E^{\mathcal{F}}(S_{n})\bigr)\bigr)\geq e^{\lambda\epsilon}\bigr) \\ \leq&\frac{E^{\mathcal{F}}[\exp(\lambda(S_{n}-E^{\mathcal {F}}(S_{n})))]}{e^{\lambda\epsilon}} \\ =&\frac{E^{\mathcal{F}}[\exp(\lambda\sum_{i=1}^{n}(X_{i}-E^{\mathcal {F}}(X_{i})))]}{e^{\lambda\epsilon}} \\ \leq&\frac{g(n)}{e^{\lambda\epsilon}}\prod_{i=1}^{n}E^{\mathcal {F}} \bigl[\exp\bigl(\lambda\bigl(X_{i}-E^{\mathcal{F}}(X_{i}) \bigr)\bigr)\bigr] \\ \leq&\frac{g(n)}{e^{\lambda\epsilon}}\prod_{i=1}^{n} \exp \biggl[\frac {1}{8}\lambda^{2}(b_{i}-a_{i})^{2} \biggr] \\ =&g(n)\exp \Biggl[\frac{\lambda^{2}}{8}\sum_{i=1}^{n}(b_{i}-a_{i})^{2}- \lambda \epsilon \Biggr]\quad \mbox{a.s.}, \end{aligned}$$

where the first inequality follows by applying (1), the second by the \(\mathcal{F}\)-acceptability property for \(\lambda \in(0,\delta)\), and the third by Lemma 13. Since the minimum of the function \(f(x) = ax^{2}+bx\) with \(a>0\) is attained at \(x=-\frac{b}{2a}\), the exponent above is minimized at \(\lambda= \frac{4 \epsilon}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\), which satisfies \(0<\lambda<\delta\) by the assumption on ϵ. Then

$$\begin{aligned} \begin{aligned} A&\leq g(n) \exp \biggl[\frac{16\epsilon^{2}}{ (\sum_{i=1}^{n}(b_{i}-a_{i})^{2} )^{2}}\frac{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}{8}-\frac {4\epsilon^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}} \biggr] \\ &=g(n)\exp \biggl[-\frac{2\epsilon^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}} \biggr] \quad \mbox{a.s.} \end{aligned} \end{aligned}$$

Given that \(-X_{1},\ldots,-X_{n}\) are also \(\mathcal{F}\)-acceptable, we also have that

$$P^{\mathcal{F}}\bigl[-\bigl(S_{n}-E^{\mathcal{F}} ( S_{n} ) \bigr)\geq\epsilon\bigr]\leq g(n)\exp \biggl[-\frac{2\epsilon^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}} \biggr] \quad \mbox{a.s.} $$

The desired result follows by combining the last two expressions. □
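To illustrate Theorem 14 by simulation (our own sketch, not from the paper), take \(\mathcal{F}=\sigma(Z)\) with Z Bernoulli and let the \(X_{i}\) be conditionally i.i.d. \(\mbox{Uniform}(Z,Z+1)\); conditional independence makes them \(\mathcal{F}\)-acceptable with \(g(n)=1\), \(a_{i}=Z\), \(b_{i}=Z+1\):

```python
# Simulation sketch of Theorem 14: given Z = z, X_i ~ i.i.d. Uniform(z, z+1),
# so a_i = z, b_i = z + 1, sum (b_i - a_i)^2 = n, and g(n) = 1.
import numpy as np

rng = np.random.default_rng(1)
n, reps, eps = 50, 200_000, 5.0

for z in (0, 1):                            # condition on each atom of F
    s = rng.uniform(z, z + 1, size=(reps, n)).sum(axis=1)
    dev = np.abs(s - n * (z + 0.5))         # |S_n - E^F(S_n)| on the stratum
    emp = (dev >= eps).mean()               # empirical conditional tail
    bound = 2 * np.exp(-2 * eps**2 / n)     # Theorem 14 with g(n) = 1
    print(f"Z = {z}: empirical = {emp:.4f} <= bound = {bound:.4f}")
```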

The result that follows is the conditional version of Theorem 2.1 of Shen et al. [3].

Theorem 15

Let \(X_{1},\ldots,X_{n}\) be \(\mathcal{F}\)-acceptable random variables for \(\delta>0\). Assume that \(E^{\mathcal{F}}(X_{i})=0\) and that \(E^{\mathcal {F}}(X_{i}^{2})\) is finite for any i. Let \(B_{n}^{2} = \sum_{i=1}^{n}E^{\mathcal{F}}(X_{i}^{2})\). Assume that \(|X_{i}|\leq cB_{n}\) a.s., where c is a positive \(\mathcal{F}\)-measurable random variable. Then

$$P^{\mathcal{F}}[S_{n}\geq\epsilon B_{n}]\leq \textstyle\begin{cases} g(n)\exp [-\frac{\epsilon^{2}}{2}(1-\frac{\epsilon c}{2}) ] \quad \textit{a.s.} & \textit{for }\epsilon c \leq1\textit{ and }\epsilon< \delta B_{n}, \\ g(n)\exp [-\frac{\epsilon}{4c} ]\quad \textit{a.s.} & \textit{for }\epsilon c \geq1\textit{ and }\frac{1}{c B_{n}}< \delta \end{cases} $$

for any positive \(\mathcal{F}\)-measurable random variable ϵ.

Proof

Let t be a positive \(\mathcal{F}\)-measurable random variable such that \(tcB_{n}\leq1\). Note that, for \(k\geq2\),

$$\begin{aligned} E^{\mathcal{F}}\bigl(X_{i}^{k}\bigr) \leq&E^{\mathcal{F}} \bigl[|X_{i}|^{k-2}X_{i}^{2}\bigr] \\ \leq&(cB_{n})^{k-2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr) \quad \mbox{a.s.} \end{aligned}$$

Then

$$\begin{aligned} E^{\mathcal{F}}\bigl(e^{tX_{i}}\bigr) =&1+E^{\mathcal{F}}(X_{i})+ \sum_{k=2}^{\infty }\frac{t^{k}}{k !}E^{\mathcal{F}} \bigl(X_{i}^{k}\bigr) \\ \leq& 1+\frac{t^{2}}{2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr) \biggl[1+\frac {t}{3}cB_{n}+\frac{t^{2}}{3\cdot4}(cB_{n})^{2}+ \cdots \biggr] \\ \leq& 1+\frac{t^{2}}{2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr) \biggl[1+\frac {t}{3}cB_{n}+\frac{tcB_{n}}{3\cdot4}+ \frac{tcB_{n}}{3\cdot4\cdot5}+\cdots \biggr] \\ \leq& 1+\frac{t^{2}}{2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr) \biggl(1+\frac {tcB_{n}}{2} \biggr) \\ \leq& \exp \biggl[\frac{t^{2}}{2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr) \biggl(1+\frac {tcB_{n}}{2} \biggr) \biggr]\quad \mbox{a.s.}, \end{aligned}$$

where the second inequality follows because \(tcB_{n} \leq1\) a.s., the third because \(\frac{1}{3}+\frac{1}{3\cdot4}+\frac{1}{3\cdot4\cdot5}+\cdots = \sum_{k=3}^{\infty}\frac{2}{k!} = 2e-5\leq\frac{1}{2}\), and the last from the elementary bound \(1+x\leq e^{x}\). Hence, for \(|t|<\delta\),

$$\begin{aligned} E^{\mathcal{F}}\bigl(e^{tS_{n}}\bigr) =& E^{\mathcal{F}} \Biggl(\prod _{i=1}^{n}e^{tX_{i}} \Biggr) \\ \leq&g(n)\prod_{i=1}^{n}E^{\mathcal{F}} \bigl(e^{tX_{i}}\bigr) \\ \leq&g(n)\prod_{i=1}^{n}\exp \biggl[ \frac{t^{2}}{2}E^{\mathcal {F}}\bigl(X_{i}^{2}\bigr) \biggl(1+\frac{tcB_{n}}{2} \biggr) \biggr] \\ =&g(n)\exp \Biggl[\frac{t^{2}}{2}\sum_{i=1}^{n}E^{\mathcal{F}} \bigl(X_{i}^{2}\bigr) \biggl(1+\frac{tcB_{n}}{2} \biggr) \Biggr] \\ =&g(n)\exp \biggl[\frac{t^{2}}{2}B_{n}^{2} \biggl(1+ \frac{tcB_{n}}{2} \biggr) \biggr]\quad \mbox{a.s.}, \end{aligned}$$

where the first inequality follows by the \(\mathcal{F}\)-acceptability property. Note that since ϵ, \(B_{n}\), and c are positive \(\mathcal{F}\)-measurable random variables, the conditional Markov inequality can be applied, and by using the above calculations we have that, for \(0< t<\delta\),

$$\begin{aligned} P^{\mathcal{F}}(S_{n}\geq\epsilon B_{n}) =&P^{\mathcal {F}} \bigl(e^{tS_{n}}\geq e^{t\epsilon B_{n}}\bigr) \\ \leq&\frac{E^{\mathcal{F}}[e^{tS_{n}}]}{e^{t\epsilon B_{n}}} \\ \leq&g(n)\frac{\exp [\frac{t^{2}}{2}B_{n}^{2} (1+\frac {tcB_{n}}{2} ) ]}{e^{t\epsilon B_{n}}} \\ =&g(n)\exp \biggl[-t\epsilon B_{n}+ \frac{t^{2}}{2}B_{n}^{2} \biggl(1+\frac{tcB_{n}}{2} \biggr) \biggr]\quad \mbox{a.s.} \end{aligned}$$
(2)

If \(\epsilon c\leq1\) a.s., then put \(t = \frac{\epsilon}{B_{n}}\) a.s., so that \(tcB_{n}=\epsilon c\leq1\) a.s., and \(0< t<\delta\) is equivalent to \(\epsilon<\delta B_{n}\) a.s. Then we obtain from (2) that

$$P^{\mathcal{F}}(S_{n}\geq\epsilon B_{n})\leq g(n)\exp \biggl[-\frac{\epsilon ^{2}}{2} \biggl(1-\frac{\epsilon c}{2} \biggr) \biggr]\quad \mbox{a.s.} $$

If \(\epsilon c\geq1\) a.s., then put \(t=\frac{1}{cB_{n}}\) a.s. Then we need \(\frac{1}{cB_{n}}<\delta\) a.s. In this case, from (2) we obtain

$$P^{\mathcal{F}}(S_{n}\geq\epsilon B_{n})\leq g(n)\exp \biggl[-\frac{\epsilon }{c}+\frac{3}{4c^{2}} \biggr]\leq g(n)\exp \biggl[- \frac{\epsilon}{4c} \biggr]\quad \mbox{a.s.}, $$

where the last inequality holds because \(\epsilon c\geq1\) a.s.

 □

Theorem 16

Let \(X_{1},\ldots,X_{n}\) be \(\mathcal{F}\)-acceptable random variables for \(\delta>0\). Assume that \(E^{\mathcal{F}}(X_{i}) = 0\) a.s. and that there is an \(\mathcal{F}\)-measurable random variable b such that \(|X_{i}|\leq b\) a.s. for all i. Define \(B_{n}^{2}=\sum_{i=1}^{n}E^{\mathcal{F}} (X_{i}^{2} )\). Then, for any positive \(\mathcal{F}\)-measurable random variable ϵ with \(\frac {\epsilon}{B_{n}^{2}+\frac{b}{3}\epsilon}<\delta\), we have

$$P^{\mathcal{F}}(S_{n}\geq\epsilon)\leq g(n)\exp \biggl[- \frac{\epsilon ^{2}}{2B_{n}^{2}+\frac{2}{3}b\epsilon} \biggr] \quad \textit{a.s.} $$

and

$$P^{\mathcal{F}}\bigl(\vert S_{n}\vert \geq\epsilon\bigr)\leq2 g(n)\exp \biggl[-\frac{\epsilon ^{2}}{2B_{n}^{2}+\frac{2}{3}b\epsilon} \biggr] \quad \textit{a.s.} $$

Proof

For \(t>0\), by using Taylor’s expansion we have that

$$\begin{aligned} E^{\mathcal{F}}\bigl[\exp(tX_{i})\bigr] =&E^{\mathcal{F}} \Biggl[ \sum_{j=0}^{\infty }\frac{t^{j}X_{i}^{j}}{j !} \Biggr] \\ =&\sum_{j=0}^{\infty}\frac{t^{j}}{j !}E^{\mathcal{F}} \bigl[X_{i}^{j}\bigr] \\ =&1+0+\sum_{j=2}^{\infty}\frac{t^{j}}{j !}E^{\mathcal{F}} \bigl[X_{i}^{j}\bigr] \\ =&1+\frac{t^{2}\sigma_{i}^{2}}{2}\sum_{j=2}^{\infty} \frac{t^{j-2}}{j !}\frac {E^{\mathcal{F}}[X_{i}^{j}]}{\frac{1}{2}\sigma_{i}^{2}}\quad \mbox{if }\sigma_{i}^{2} = E^{\mathcal{F}}\bigl[X_{i}^{2}\bigr]\mbox{ is nonzero} \\ =&1+\frac{t^{2}\sigma_{i}^{2}}{2}F_{i}(t) \\ \leq&\exp \biggl[\frac{t^{2}\sigma_{i}^{2}}{2}F_{i}(t) \biggr] \quad \mbox{a.s.}, \end{aligned}$$

where

$$F_{i}(t) = \sum_{j=2}^{\infty} \frac{t^{j-2}}{j !}\frac{E^{\mathcal {F}}[X_{i}^{j}]}{\frac{1}{2}\sigma_{i}^{2}}\quad \mbox{a.s.} $$

Note that on the set where \(E^{\mathcal{F}}[X_{i}^{2}] = 0\) a.s., we have the obvious upper bound \(E^{\mathcal{F}}[\exp(tX_{i})]\leq1\) a.s. since \(|E^{\mathcal{F}}[X_{i}^{j}]|\leq E^{\mathcal{F}}[X_{i}^{2}]b^{j-2}\) a.s.

Define \(c = \frac{b}{3}\) and \(M_{n} = 1+\frac{b\epsilon}{3B_{n}^{2}}=1+c\frac {\epsilon}{B_{n}^{2}}\) a.s. We choose \(t>0\) such that

$$tc\leq\frac{M_{n}-1}{M_{n}}=\frac{c\epsilon}{c\epsilon+B_{n}^{2}}< 1\quad \mbox{a.s.} $$

Observe that, for \(j\geq2\),

$$\begin{aligned} E^{\mathcal{F}}\bigl[|X_{i}|^{j}\bigr] \leq&E^{\mathcal{F}}\bigl[X_{i}^{2}b^{j-2}\bigr] \\ =&\sigma_{i}^{2}b^{j-2} \\ \leq&\sigma_{i}^{2}(3c)^{j-2} \\ \leq&\frac{1}{2}\sigma_{i}^{2}c^{j-2}j ! \quad \mbox{a.s.} \end{aligned}$$

Therefore,

$$\begin{aligned} F_{i}(t) \leq& \sum_{j=2}^{\infty} \frac{t^{j-2}}{j !}\frac{\frac {1}{2}\sigma_{i}^{2}c^{j-2}j !}{\frac{1}{2}\sigma_{i}^{2}} \\ =&\sum_{j=2}^{\infty} (tc)^{j-2} \\ =&\frac{1}{1-tc} \\ \leq&M_{n} \quad \mbox{a.s.} \end{aligned}$$

By applying the conditional Markov inequality and the \(\mathcal {F}\)-acceptability of the sequence \(\{X_{n}, n\in\mathbb{N}\}\) for \(0< t<\delta\) a.s. we have

$$\begin{aligned} P^{\mathcal{F}}(S_{n}\geq\epsilon) =&P^{\mathcal {F}} \bigl(e^{tS_{n}}\geq e^{t\epsilon}\bigr) \\ \leq&\frac{E^{\mathcal{F}}[e^{tS_{n}}]}{e^{t\epsilon}} \\ \leq&\frac{1}{e^{t\epsilon}}g(n)\prod_{i=1}^{n}E^{\mathcal {F}} \bigl[e^{tX_{i}}\bigr] \\ \leq&g(n)\exp \biggl[-t\epsilon+\frac{t^{2}B_{n}^{2}}{2}M_{n} \biggr]\quad \mbox{a.s.} \end{aligned}$$
(3)

The minimum is obtained at \(t_{0}=\frac{\epsilon}{B_{n}^{2}M_{n}}\) a.s. It can be verified that \(t_{0} = \frac{\epsilon}{c\epsilon+B_{n}^{2}}\), and therefore the condition \(tc<1\) a.s. is satisfied. Moreover, \(t_{0}c = \frac{c\epsilon}{c\epsilon+B_{n}^{2}} = \frac{M_{n}-1}{M_{n}}\). So the above calculations show that the choice of \(t_{0}=\frac{\epsilon}{B_{n}^{2}M_{n}}\) a.s. is valid. Furthermore, we have assumed that \(0< t<\delta\) a.s., and for the particular choice of \(t_{0}\) this restriction leads to \(\frac {\epsilon}{B_{n}^{2}M_{n}}<\delta\), which is equivalent to

$$\frac{\epsilon}{B_{n}^{2}+c\epsilon}= \frac{\epsilon}{B_{n}^{2}+\frac {b}{3}\epsilon}< \delta \quad \mbox{a.s.}, $$

which is satisfied because of the assumption of the theorem. By substituting \(t_{0} =\epsilon/ B_{n}^{2}M_{n}\) into (3) we obtain

$$\begin{aligned} P^{\mathcal{F}}(S_{n}\geq\epsilon) \leq& g(n)\exp \biggl[- \frac{\epsilon ^{2}}{B_{n}^{2}M_{n}}+\frac{\frac{\epsilon^{2}}{(B_{n}^{2}M_{n})^{2}}B_{n}^{2}M_{n}}{2} \biggr] \\ =&g(n)\exp \biggl[-\frac{1}{2}\frac{\epsilon^{2}}{B_{n}^{2}M_{n}} \biggr] \\ =&g(n)\exp \biggl[-\frac{\epsilon^{2}}{2B_{n}^{2}+\frac{2}{3}b\epsilon} \biggr]\quad \mbox{a.s.} \end{aligned}$$

Since \(\{-X_{n},n\in\mathbb{N}\}\) is again a sequence of \(\mathcal {F}\)-acceptable random variables, we obtain

$$P^{\mathcal{F}}(S_{n}\leq-\epsilon)=P^{\mathcal{F}}(-S_{n} \geq\epsilon )\leq g(n)\exp \biggl[-\frac{\epsilon^{2}}{2B_{n}^{2}+\frac{2}{3}b\epsilon } \biggr]\quad \mbox{a.s.} $$

 □
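As a sanity check of Theorem 16 (our own sketch, not from the paper), independent \(\mbox{Uniform}(-1,1)\) variables are \(\mathcal{F}\)-acceptable with \(g(n)=1\) when \(\mathcal{F}\) is trivial; here \(b=1\) and \(B_{n}^{2}=n/3\), and the simulated two-sided tail stays below the stated bound:

```python
# Simulation sketch of the Bernstein-type bound of Theorem 16.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 90, 500_000
s = rng.uniform(-1, 1, size=(reps, n)).sum(axis=1)
B2 = n / 3.0                                  # B_n^2 for Uniform(-1, 1)

for eps in (10.0, 15.0):
    emp = (np.abs(s) >= eps).mean()
    bound = 2 * np.exp(-eps**2 / (2 * B2 + 2 * eps / 3))   # b = 1, g(n) = 1
    print(f"eps = {eps}: empirical = {emp:.5f} <= bound = {bound:.5f}")
```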

The probability inequalities presented above were proven under the assumption of bounded random variables. The result that follows provides a probability inequality under a moment condition.

Theorem 17

Let \(\{X_{n},n\in\mathbb{N}\}\) be a sequence of \(\mathcal{F}\)-acceptable random variables for \(\delta>0\), and let \(\{ c_{n},n\in\mathbb{N}\}\) be a sequence of positive \(\mathcal {F}\)-measurable random variables with \(C_{n} = \sum_{i=1}^{n}c_{i}\), \(n=1,2,\ldots\). Assume that there is a positive \(\mathcal {F}\)-measurable random variable T with \(T\leq\delta\) a.s. such that, for \(|t|\leq T\) and fixed \(n\geq1\),

$$ E^{\mathcal{F}}e^{tX_{k}} \leq e^{\frac {1}{2}c_{k}t^{2}} \quad \textit{a.s.}, k=1,2,\ldots,n. $$
(4)

Then, for any positive \(\mathcal{F}\)-measurable random variable ϵ,

$$P^{\mathcal{F}}\bigl(\vert S_{n}\vert \geq\epsilon\bigr)\leq \textstyle\begin{cases} 2g(n)e^{-\frac{\epsilon^{2}}{2C_{n}}} \quad \textit{a.s.}, & 0< \epsilon\leq C_{n}T, \\ 2g(n)e^{-\frac{T\epsilon}{2}} \quad \textit{a.s.}, & \epsilon\geq C_{n}T. \end{cases} $$

Proof

The proof follows arguments similar to those presented in Petrov [14]. Let \(0< t\leq T\) a.s. Then by applying the conditional Markov inequality and the property of \(\mathcal{F}\)-acceptability we have that

$$\begin{aligned} P^{\mathcal{F}}(S_{n}\geq\epsilon)&\leq e^{-t\epsilon}E^{\mathcal {F}} \bigl[e^{tS_{n}} \bigr] \\ &\leq g(n)e^{-t\epsilon}\prod_{k=1}^{n}E^{\mathcal{F}} \bigl[e^{tX_{k}} \bigr] \\ &\leq g(n)e^{-t\epsilon+\frac{C_{n}t^{2}}{2}} \quad \mbox{a.s.} \end{aligned}$$

The aim is to minimize the above bound. Observe that, for \(0< \epsilon \leq C_{n}T\) a.s., the above bound is minimized for \(t= \frac{\epsilon }{C_{n}}\), which satisfies the condition \(0< t\leq T\) a.s. Therefore, by letting \(t= \frac{\epsilon}{C_{n}}\) we have that

$$ P^{\mathcal{F}}(S_{n}\geq\epsilon)\leq g(n)e^{-\frac {\epsilon^{2}}{2C_{n}}} \quad \mbox{a.s.} $$
(5)

Now, suppose that \(\epsilon\geq C_{n}T\). Then the minimum over \(0<t\leq T\) is attained at \(t=T\), and since \(\frac{C_{n}T^{2}}{2}\leq\frac{T\epsilon}{2}\), the bound becomes

$$ P^{\mathcal{F}}(S_{n}\geq\epsilon)\leq g(n)e^{-\frac {T\epsilon}{2}} \quad \mbox{a.s.} $$
(6)

Since condition (4) holds for \(-T\leq t\leq0\) as well, the random variables \(-X_{1},-X_{2},\ldots,-X_{n}\) are \(\mathcal{F}\)-acceptable and satisfy condition (4) for \(0\leq t\leq T\). Therefore, applying inequalities (5) and (6) to \(-S_{n}\), we have that

$$ P^{\mathcal{F}}(S_{n}\leq-\epsilon)\leq g(n)e^{-\frac {\epsilon^{2}}{2C_{n}}} \quad \mbox{a.s. if } 0< \epsilon\leq C_{n}T $$
(7)

and

$$ P^{\mathcal{F}}(S_{n}\leq-\epsilon)\leq g(n)e^{-\frac {T\epsilon}{2}}\quad \mbox{a.s. if } \epsilon\geq C_{n}T. $$
(8)

The desired result follows by combining (5) with (7) and (6) with (8). □
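Condition (4) is a conditional sub-Gaussian requirement. For instance (our own sketch), independent Rademacher signs satisfy \(Ee^{tX_{k}}=\cosh t\leq e^{t^{2}/2}\), so (4) holds with \(c_{k}=1\), \(C_{n}=n\), and any T; the bound of Theorem 17 in the regime \(0<\epsilon\leq C_{n}T\) can then be compared with simulation:

```python
# Simulation sketch of Theorem 17 for independent Rademacher signs:
# E exp(t X_k) = cosh(t) <= exp(t^2 / 2), so (4) holds with c_k = 1, C_n = n.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 500_000
s = rng.choice([-1, 1], size=(reps, n)).sum(axis=1)

for eps in (10.0, 20.0, 30.0):
    emp = (np.abs(s) >= eps).mean()
    bound = 2 * np.exp(-eps**2 / (2 * n))     # the 0 < eps <= C_n T regime
    print(f"eps = {eps}: empirical = {emp:.5f} <= bound = {bound:.5f}")
```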

Remark 18

When \(\mathcal{F}\) is the trivial σ-algebra and \(g(n) \equiv1\), the above result reduces to Corollary 2.1 of Shen and Wu [17].

2.2 Conditional complete convergence

Complete convergence results are well known for independent random variables (see, e.g., Gut [18]). The classical results of Hsu, Robbins, Erdős, Baum, and Katz were extended to certain dependent sequences. Using the results of Section 2.1, we can establish complete convergence for partial sums of \(\mathcal{F}\)-acceptable random variables under various assumptions. We will need the following definition of conditional complete convergence (see Christofides and Hadjikyriakou [19] for details).

Definition 19

A sequence of random variables \(\{X_{n},n\in\mathbb{N}\}\) is said to converge completely given \(\mathcal{F}\) to a random variable X if

$$\sum_{i=1}^{\infty}P^{\mathcal{F}}\bigl( \vert X_{i}-X\vert >\epsilon\bigr)< \infty \quad \mbox{a.s.} $$

for any \(\mathcal{F}\)-measurable random variable ϵ such that \(\epsilon>0\) a.s.

The following class of sequences, introduced for brevity, was first defined by Shen et al. [3]:

$$\mathcal{H} = \Biggl\{ \{b_{n}\}: \sum_{n=1}^{\infty}h^{b_{n}}< \infty \mbox{ for every }0< h< 1 \Biggr\} . $$

For example, \(\{n\}\in\mathcal{H}\) since \(\sum_{n=1}^{\infty}h^{n}<\infty\) for every \(0<h<1\), whereas \(\{\log n\}\notin\mathcal{H}\) because \(\sum_{n=2}^{\infty}h^{\log n}=\sum_{n=2}^{\infty}n^{\log h}\) diverges whenever \(e^{-1}\leq h<1\).

The theorem that follows is a complete convergence theorem for ‘self-normalized’ sums.

Theorem 20

Let \(X_{1}, X_{2},\ldots\) be a sequence of \(\mathcal {F}\)-acceptable random variables for \(\delta>0\). Assume that \(E^{\mathcal{F}}(X_{i}) = 0\) a.s. and \(|X_{i}|\leq b\) a.s. for all i, where b is an \(\mathcal{F}\)-measurable random variable. Assume that \(g(n)< K\) a.s., where K is an a.s. finite \(\mathcal{F}\)-measurable random variable. Let \(B_{n}^{2} = \sum_{i=1}^{n}E^{\mathcal{F}} ( X_{i}^{2} )\) and assume that \(\{B_{n}^{2},n\in\mathbb{N}\}\in\mathcal {H}\) a.s. Then

$$\frac{S_{n}}{B_{n}^{2}}\textit{ converges completely to } 0\textit{ given } \mathcal{F}. $$

Proof

By applying the result of Theorem 16 for \(\frac{\epsilon }{1+\frac{b\epsilon}{3}}<\delta\) a.s. we have that

$$\begin{aligned} \sum_{n=1}^{\infty} P^{\mathcal{F}} \biggl( \frac{|S_{n}|}{B_{n}^{2}}\geq \epsilon \biggr) \leq& 2 \sum _{n=1}^{\infty} g(n)\exp \biggl(-\frac {B_{n}^{4}\epsilon^{2}}{2B_{n}^{2}+\frac{2}{3}bB_{n}^{2}\epsilon} \biggr) \\ =& 2 \sum_{n=1}^{\infty} g(n) \biggl(\exp \biggl(-\frac{\epsilon^{2}}{2+\frac {2}{3}\epsilon b} \biggr) \biggr)^{B_{n}^{2}} \\ < &\infty \quad \mbox{a.s.} \end{aligned}$$

because \(g(n)\leq K\) a.s. and \(\{B_{n}^{2},n\in\mathbb{N}\}\in\mathcal {H}\) a.s. □

Remark 21

In Theorem 20, the condition \(\{B_{n}^{2},n\in\mathbb {N}\}\in\mathcal{H}\) can be replaced by \(B_{n}^{2} = O(b_{n})\), where \(\{b_{n}\}\in\mathcal{H}\) a.s. Then it can be proven that

$$\frac{S_{n}}{b_{n}}\mbox{ converges completely to }0\mbox{ given }\mathcal{F}. $$

Here \(b_{n}\) can be a sequence of \(\mathcal{F}\)-measurable random variables. In that case, the result is a conditional version of Theorem 3.1 of Shen et al. [3].

Next, we provide a result that is a conditional version of Theorem 3.2 of Shen et al. [3].

Theorem 22

Let \(X_{1},X_{2},\ldots\) be a sequence of \(\mathcal{F}\)-acceptable random variables for \(\delta> 0\). Assume that \(|X_{i}|\leq c\) a.s., where c is an \(\mathcal{F}\)-measurable random variable such that \(c>0\) a.s. Assume that \(g(n)\leq K\) a.s., where K is an a.s. finite \(\mathcal {F}\)-measurable random variable. Moreover, let \(\{b_{n}\}\in\mathcal{H}\) a.s., where the \(b_{n}\) are \(\mathcal{F}\)-measurable. Then

$$\frac{S_{n}-E^{\mathcal{F}}(S_{n})}{\sqrt{nb_{n}}}\textit{ converges completely to }0\textit{ given } \mathcal{F}. $$

Proof

By applying Theorem 14 for \(0<\epsilon<c^{2}\delta\sqrt{\frac {n}{b_{n}}}\) a.s. we have that

$$\begin{aligned}& \sum_{n=1}^{\infty} P^{\mathcal{F}}\bigl(\bigl\vert S_{n}-E^{\mathcal{F}}(S_{n})\bigr\vert \geq (nb_{n})^{\frac{1}{2}}\epsilon\bigr) \\& \quad \leq 2 \sum _{n=1}^{\infty} g(n)\exp \biggl(-\frac{2nb_{n}\epsilon^{2}}{n(2c)^{2}} \biggr) \\& \quad = 2 \sum_{n=1}^{\infty} g(n) \biggl(\exp \biggl(-\frac{\epsilon ^{2}}{2c^{2}} \biggr) \biggr)^{b_{n}} \\& \quad < \infty\quad \mbox{a.s.} \end{aligned}$$

because \(g(n)\leq K\) a.s. and \(b_{n}\in\mathcal{H}\) a.s. □

Theorem 23

Let \(X_{1},X_{2},\ldots\) be a sequence of \(\mathcal{F}\)-acceptable random variables for \(\delta> 0\) such that all the assumptions of Theorem 17 are satisfied. If, for any \(\epsilon>0\) a.s.,

$$\sum_{n=1}^{\infty}g(n)e^{-\frac{b_{n}^{2}\epsilon^{2}}{2C_{n}}}< \infty \quad \textit{a.s.}\quad \textit{and}\quad \sum_{n=1}^{\infty}g(n)e^{-\frac{Tb_{n}\epsilon }{2}}< \infty \quad \textit{a.s.}, $$

where \(\{b_{n},n\in\mathbb{N}\}\) is a sequence of positive \(\mathcal {F}\)-measurable random variables, then

$$\frac{S_{n}}{b_{n}}\textit{ converges completely to }0\textit{ given } \mathcal{F}. $$

Proof

By applying Theorem 17 we have

$$\begin{aligned} \sum_{n=1}^{\infty}P^{\mathcal{F}} \biggl( \frac{1}{b_{n}}|S_{n}|\geq\epsilon \biggr) \leq&2\sum _{n=1}^{\infty}g(n)e^{-\frac{b_{n}^{2}\epsilon ^{2}}{2C_{n}}}+2\sum _{n=1}^{\infty}g(n)e^{-\frac{Tb_{n}\epsilon}{2}} \\ < &\infty\quad \mbox{a.s.} \end{aligned}$$

 □

The theorem that follows gives a conditional exponential inequality for the partial sum of \(\mathcal{F}\)-acceptable random variables under a moment condition which, in the unconditional case, appears very frequently in large deviation results (see, e.g., Nagaev [20] and Teicher [21]); it also appears as condition (3.3) of Theorem 3.3 of Shen et al. [3]. However, the bound provided here allows us to prove complete convergence, in the unconditional case, under assumptions different from those of Theorem 3.3 of Shen et al. [3].

Theorem 24

Let \(X_{1},X_{2},\ldots\) be a sequence of \(\mathcal{F}\)-acceptable random variables for \(\delta>0\). Assume that \(E^{\mathcal{F}}(X_{i}) = 0\) and let \(\sigma_{i}^{2} = E^{\mathcal{F}}(X_{i}^{2})\) be a.s. finite. Let \(B_{n}^{2} = \sum_{i=1}^{n}\sigma_{i}^{2}\). Assume that there exists an a.s. positive and a.s. finite \(\mathcal{F}\)-measurable random variable H such that

$$\bigl\vert E^{\mathcal{F}}\bigl(X_{i}^{m}\bigr)\bigr\vert \leq\frac{m !}{2}\sigma_{i}^{2}H^{m-2}, \quad \forall i\textit{ and } m\geq2 \textit{ a.s.} $$
(i) If \(\frac{1}{H} [ 1 - \sqrt{\frac{B_{n}^{2}}{2Hx + B_{n}^{2}}} ]<\delta\), then, for an a.s. positive \(\mathcal{F}\)-measurable random variable x, we have

$$P^{\mathcal{F}} \Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert \geq x \Biggr) \leq2g(n) \exp \biggl[ - \frac{1}{2 H^{2}} { \Bigl( \sqrt{2Hx + B_{n}^{2}} - \sqrt{B_{n}^{2}} \Bigr)}^{2} \biggr] \quad \textit{a.s.} $$

(ii) If \(g(n)\leq K\) a.s. for all n, where K is a.s. finite, and \(\{ B_{n}^{2}\}\in\mathcal{H}\) a.s., then

$$\frac{S_{n}}{B_{n}^{2}}\textit{ converges completely to }0\textit{ given } \mathcal{F}. $$

Proof

$$\begin{aligned} E^{\mathcal{F}}\bigl[\exp(tX_{i})\bigr] =&E^{\mathcal{F}} \biggl[1+tX_{i}+\frac {t^{2}X_{i}^{2}}{2}+\frac{t^{3}X_{i}^{3}}{6}+\cdots \biggr] \\ =&1+\frac{t^{2}}{2}E^{\mathcal{F}}\bigl(X_{i}^{2} \bigr)+\frac{t^{3}}{6}E^{\mathcal {F}}\bigl(X_{i}^{3} \bigr)+\cdots \\ \leq&1+\frac{t^{2}}{2}\sigma_{i}^{2} \bigl(1+H|t|+H^{2}t^{2}+\cdots\bigr), \end{aligned}$$

where the second equality follows by the fact that \(E^{\mathcal {F}}(X_{i}) = 0\) and t is an \(\mathcal{F}\)-measurable random variable. If \(|t|<\frac{1}{H}\), then

$$\begin{aligned} E^{\mathcal{F}}\bigl[\exp(tX_{i})\bigr] \leq& 1+\frac{t^{2}\sigma_{i}^{2}}{2 ( 1-H\vert t\vert )} \\ \leq&\exp \biggl\{ \frac{t^{2}\sigma_{i}^{2}}{2(1-H|t|)} \biggr\} \quad \mbox{a.s.} \end{aligned}$$

For \(\mathcal{F}\)-measurable \(x\geq0\) and \(0\leq t<\frac{1}{H}\) with \(t<\delta\) a.s., we have that

$$\begin{aligned} P^{\mathcal{F}}\Biggl(\sum_{i=1}^{n}X_{i} \geq x\Biggr) =&P^{\mathcal{F}} \bigl(e^{t\sum _{i=1}^{n}X_{i}}\geq e^{tx} \bigr) \\ \leq&\frac{E^{\mathcal{F}} [e^{t\sum_{i=1}^{n}X_{i}} ]}{e^{tx}} \\ \leq&g(n) e^{-tx} \prod_{i=1}^{n} E^{\mathcal{F}} \bigl[e^{tX_{i}} \bigr] \\ \leq&g(n) e^{-tx} \prod_{i=1}^{n} \exp \biggl( \frac{t^{2}\sigma_{i}^{2}}{2 ( 1 - H t)} \biggr) \\ =& g(n)\exp{ \biggl( -tx + \frac{t^{2} B_{n}^{2}}{2(1-H t)} \biggr)} \quad \mbox{a.s.} \end{aligned}$$
(9)

Let

$$h(t) = -tx + \frac{t^{2} B_{n}^{2}}{2(1-Ht)} . $$

Then \(h(t)\) is minimized at

$$t= \frac{1}{H} \biggl[ 1 - \sqrt{\frac{B_{n}^{2}}{2Hx + B_{n}^{2}}} \biggr], $$

and substituting this value into the RHS of (9) (a choice that satisfies \(t<\delta\) by the assumption of (i)), after some algebraic manipulations we obtain that

$$P^{\mathcal{F}} \Biggl( \sum_{i=1}^{n}X_{i} \geq x \Biggr) \leq g(n) \exp \biggl[ - \frac{1}{2 H^{2}} { \Bigl( \sqrt {2Hx + B_{n}^{2}} - \sqrt{B_{n}^{2}} \Bigr)}^{2} \biggr] \quad \mbox{a.s.} $$

Since \(-X_{1},-X_{2},\ldots,-X_{n}\) is also a sequence of \(\mathcal {F}\)-acceptable random variables, we obtain

$$P^{\mathcal{F}} \Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert \geq x \Biggr) \leq2g(n) \exp \biggl[ - \frac{1}{2 H^{2}} { \Bigl( \sqrt{2Hx + B_{n}^{2}} - \sqrt{B_{n}^{2}} \Bigr)}^{2} \biggr] \quad \mbox{a.s.} $$

Using this inequality with x replaced by \(B_{n}^{2}x\), we can prove that

$$\begin{aligned} \sum_{n=1}^{\infty}P^{\mathcal{F}} \biggl( \frac{|S_{n}|}{B_{n}^{2}}\geq x \biggr) \leq& 2\sum_{n=1}^{\infty}g(n) \exp \biggl[ - \frac{1}{2 H^{2}} { \Bigl( \sqrt{2H B_{n}^{2} x + B_{n}^{2}} - \sqrt{B_{n}^{2}} \Bigr)}^{2} \biggr] \\ =& 2\sum_{n=1}^{\infty}g(n) \exp \biggl\{ B_{n}^{2} \biggl[ - \frac{1}{2 H^{2}} { ( \sqrt{2H x + 1} - 1 )}^{2} \biggr] \biggr\} \\ < & \infty\quad \mbox{a.s.} \end{aligned}$$
(10)

because \(g(n)\leq K\) a.s. and \(\{B_{n}^{2}\}\in\mathcal{H}\) a.s. This proves (ii).

 □
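The minimization step in the proof can be sanity-checked numerically. The following sketch (our own, with arbitrary positive test values for H, \(B_{n}^{2}\), and x, all assumptions of ours) confirms that \(h(t)\) is minimized at the stated point and that the minimum equals \(-\frac{1}{2H^{2}}(\sqrt{2Hx+B_{n}^{2}}-\sqrt{B_{n}^{2}})^{2}\):

```python
# Numeric sanity check of the minimization of h(t) in the proof of Theorem 24.
import numpy as np

H, B2, x = 0.7, 3.0, 5.0                     # arbitrary positive test values
h = lambda t: -t * x + t**2 * B2 / (2 * (1 - H * t))
t_star = (1 / H) * (1 - np.sqrt(B2 / (2 * H * x + B2)))

ts = np.linspace(1e-6, 1 / H - 1e-6, 100_001)
claimed = -(np.sqrt(2 * H * x + B2) - np.sqrt(B2))**2 / (2 * H**2)
print(h(ts).min(), h(t_star), claimed)       # all three agree (up to grid error)
```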

Remark 25

In the previous theorem, it is assumed that \(g(n) \leq K\) a.s. for every n, where K is finite a.s. However, complete convergence may hold without this assumption: for example, the RHS of (10) may be finite even when g is not bounded. Similar statements can be made for Theorems 20 and 22.

3 Conclusions

In this paper, we define the class of conditionally acceptable random variables as a generalization of the class of acceptable random variables studied previously by Giuliano Antonini et al. [1], Shen et al. [3], and Sung et al. [4], among others. The idea of conditioning on a σ-algebra is gaining increasing popularity, with potential applications in fields such as risk theory and actuarial science. For the class of conditionally acceptable random variables, we provide useful probability inequalities, mainly of the exponential type, which can be used to establish asymptotic results and, in particular, complete convergence results. We anticipate that the results presented in this paper will serve as a basis for further research activity, yielding additional theoretical results and applications.

References

  1. Giuliano Antonini, R, Kozachenko, Y, Volodin, A: Convergence of series of dependent ϕ-subgaussian random variables. J. Math. Anal. Appl. 338, 1188-1203 (2008)

  2. Feller, W: An Introduction to Probability Theory and Its Applications, vol. II, 2nd edn. Wiley, New York (1971)

  3. Shen, A, Hu, S, Volodin, A, Wang, X: Some exponential inequalities for acceptable random variables and complete convergence. J. Inequal. Appl. 2011, 142 (2011)

  4. Sung, SH, Srisuradetchai, P, Volodin, A: A note on the exponential inequality for a class of dependent random variables. J. Korean Stat. Soc. 40, 109-114 (2011)

  5. Wang, Y, Li, Y, Gao, Q: On the exponential inequality for acceptable random variables. J. Inequal. Appl. 2011, 40 (2011)

  6. Choi, J-Y, Baek, J-I: Exponential inequalities and complete convergence of extended acceptable random variables. J. Appl. Math. Inform. 31, 417-424 (2013)

  7. Wang, X, Xu, C, Hu, T-C, Volodin, A, Hu, S: On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. Test 23(3), 607-629 (2014)

  8. Wang, K, Wang, Y, Gao, Q: Uniform asymptotics for the finite-time ruin probability of a new dependent risk model with constant interest rate. Methodol. Comput. Appl. Probab. 15(1), 109-124 (2013)

  9. Chow, YS, Teicher, H: Probability Theory: Independence, Interchangeability, Martingales. Springer, New York (1978)

  10. Majerek, D, Nowak, W, Zieba, W: Conditional strong law of large numbers. Int. J. Pure Appl. Math. 20, 143-157 (2005)

  11. Roussas, GG: On conditional independence, mixing and association. Stoch. Anal. Appl. 26, 1274-1309 (2008)

  12. Prakasa Rao, BLS: Conditional independence, conditional mixing and conditional association. Ann. Inst. Stat. Math. 61, 441-460 (2009)

  13. Yuan, D-M, An, J, Wu, X-S: Conditional limit theorems for conditionally negatively associated random variables. Monatshefte Math. 161, 449-473 (2010)

  14. Petrov, VV: Limit Theorems of Probability Theory. Oxford University Press, London (1995)

  15. Hoeffding, W: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58, 13-30 (1963)

  16. Yuan, D-M, Xie, Y: Conditional limit theorems for conditionally linearly negative quadrant dependent random variables. Monatshefte Math. 166, 281-299 (2012)

  17. Shen, A, Wu, R: Some probability inequalities for a class of random variables and their applications. J. Inequal. Appl. 2013, 57 (2013)

  18. Gut, A: Probability: A Graduate Course. Springer, Berlin (2005)

  19. Christofides, TC, Hadjikyriakou, M: Conditional demimartingales and related results. J. Math. Anal. Appl. 398, 380-391 (2013)

  20. Nagaev, SV: Large deviations of sums of independent random variables. Ann. Probab. 7, 745-789 (1979)

  21. Teicher, H: Exponential bounds for large deviations of sums of unbounded random variables. Sankhya 46, 41-53 (1984)


Acknowledgements

The authors are grateful to the two anonymous referees for their valuable comments, which led to a much improved version of the manuscript.

Author information

Correspondence to Tasos C Christofides.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Christofides, T.C., Fazekas, I. & Hadjikyriakou, M. Conditional acceptability of random variables. J Inequal Appl 2016, 149 (2016). https://doi.org/10.1186/s13660-016-1093-1
