Some mean convergence theorems for arrays of rowwise pairwise negative quadrant dependent random variables
Journal of Inequalities and Applications volume 2018, Article number: 221 (2018)
Abstract
For arrays of rowwise pairwise negative quadrant dependent random variables, conditions are provided under which weighted averages converge in mean to 0, thereby extending a result of Chandra, and conditions are also provided under which normed and centered row sums converge in mean to 0. These results are new even if the random variables in each row of the array are independent. Examples are provided showing (i) that the results can fail if the rowwise pairwise negative quadrant dependence hypotheses are dispensed with, and (ii) that almost sure convergence does not necessarily hold.
1 Introduction
For a sequence of independent and identically distributed (i.i.d.) random variables \(\{X_{n}, n \geq 1 \}\) with \(\mathbb{E}X_{1} = 0\), Pyke and Root [12] established the degenerate mean convergence law
$$\frac{\sum_{j=1}^{n} X_{j}}{n} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0. \quad\quad (1.1) $$
A considerably simpler proof of the limit law (1.1) was obtained by Dharmadhikari [4] who did not refer to the Pyke and Root [12] article. Chandra [3] established the following more general result for mean convergence of weighted averages. Its proof is more natural, straightforward, and powerful than that of Dharmadhikari [4]. Chandra’s [3] method is novel in the sense that the level of truncation does not depend on n (the sample size), whereas Dharmadhikari [4] used the truncation level \(\sqrt{n}\). The limit law (1.1) is obtained immediately from the Chandra [3] result by taking \(a_{n,j} = n^{-1}\), \(1 \leq j \leq n\), \(n \geq 1\).
Theorem 1.1
(Chandra [3], Theorem 1)
Let \(\{X_{n}, n \geq 1 \}\) be a sequence of pairwise i.i.d. random variables with \(\mathbb{E}X_{1} = 0\), and let \(\{a_{n,j}, 1 \leq j \leq n, n \geq 1 \}\) be a triangular array of constants such that
$$\sup_{n \geq 1} \sum_{j=1}^{n} \vert a_{n,j} \vert < \infty \quad\mbox{and}\quad \lim_{n \rightarrow \infty } \sum_{j=1}^{n} a_{n,j}^{2} = 0. $$
Then
$$\sum_{j=1}^{n} a_{n,j}X_{j} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0. $$
In the current work, we extend in Theorems 3.1 and 3.2 this degenerate mean convergence theorem of Chandra [3] in two directions:
- (i) Our results pertain to weighted averages either from an array of random variables whose nth row is comprised of \(k_{n}\) pairwise negative quadrant dependent random variables, \(n \geq 1\) (Theorem 3.1), or from an array of random variables whose nth row is comprised of \(k_{n}\) pairwise independent random variables, \(n \geq 1\) (Theorem 3.2). No independence or dependence conditions are imposed between the random variables from different rows of the arrays. The Chandra [3] result considered weighted averages from a sequence of pairwise i.i.d. random variables.
- (ii) The random variables that we consider are assumed to be stochastically dominated by a random variable, which is a weaker assumption than the assumption of Chandra [3] that the random variables are identically distributed.
The third main result (Theorem 3.3) establishes, for an array of random variables whose nth row is comprised of \(k_{n}\) pairwise negative quadrant dependent random variables, \(n \geq 1\), a degenerate mean convergence result for normed and centered row sums. In contradistinction to Theorems 3.1 and 3.2, weighted averages and stochastic domination play no role in Theorem 3.3. As in Theorems 3.1 and 3.2, no independence or dependence conditions are imposed between the random variables from different rows of the array in Theorem 3.3.
Definition 1.1
A finite set of random variables \(\{X_{1}, \ldots, X_{N} \}\) is said to be pairwise negative quadrant dependent (PNQD) if for all \(i, j \in \{1, \ldots, N\}\) (\(i \neq j\)) and all \(x, y \in \mathbb{R}\),
$$\mathbb{P} ( X_{i} \leq x, X_{j} \leq y ) \leq \mathbb{P} ( X_{i} \leq x ) \mathbb{P} ( X_{j} \leq y ). \quad\quad (1.2) $$
It is of course immediate that if \(X_{1}, \ldots, X_{N} \) are pairwise independent (a fortiori, independent) random variables, then \(\{X_{1}, \ldots, X_{N} \}\) is PNQD.
In many stochastic models, the classical assumption of independence among the random variables in the model is not a reasonable one; the random variables may be “repelling” in the sense that small values of any of the random variables increase the probability that the other random variables are large. Thus an assumption of some type of negative dependence is often more suitable. Pemantle [11] prepared an excellent survey on a general “theory of negative dependence”.
The choice of the adjective “negative” in the definition of PNQD random variables is due to the fact that (1.2) is equivalent to
$$\mathbb{P} ( X_{j} \leq y \mid X_{i} \leq x ) \leq \mathbb{P} ( X_{j} \leq y ) $$
provided \(\mathbb{P} ( X_{i} \leq x ) > 0\).
A collection of N PNQD random variables arises by sampling without replacement from a set of \(N \geq 2\) real numbers (see, e.g., Bozorgnia et al. [2]). Li et al. [7] showed that for every set of \(N \geq 2\) continuous distribution functions \(\{F_{1}, \ldots, F_{N} \}\), there exists a set of PNQD random variables \(\{X_{1}, \ldots, X_{N} \}\) such that the distribution function of \(X_{j}\) is \(F_{j}\), \(1 \leq j \leq N\), and such that for all \(j \in \{1, \ldots, N-1 \}\), \(X_{j}\) and \(X_{j+1}\) are not independent.
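The sampling-without-replacement example lends itself to a small sanity check by exact enumeration. The following sketch (ours, with the illustrative value set \(\{1, 2, 3, 4\}\)) verifies the PNQD inequality for an ordered pair drawn without replacement:

```python
from itertools import permutations

# An ordered pair (X_1, X_2) sampled without replacement from a finite set of
# reals; all N(N-1) ordered outcomes are equally likely.
values = [1.0, 2.0, 3.0, 4.0]
pairs = list(permutations(values, 2))

def p(event):
    # probability of an event under the uniform law on the ordered pairs
    return sum(1 for pair in pairs if event(pair)) / len(pairs)

# PNQD: P(X_1 <= x, X_2 <= y) <= P(X_1 <= x) P(X_2 <= y) for every x, y.
for x in values:
    for y in values:
        joint = p(lambda pair: pair[0] <= x and pair[1] <= y)
        marginal_product = p(lambda pair: pair[0] <= x) * p(lambda pair: pair[1] <= y)
        assert joint <= marginal_product + 1e-12
```

For instance, with \(x = y = 2\) the joint probability is \(2/12 = 1/6\), strictly below the product of marginals \(1/4\).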
An array of random variables \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is said to be rowwise PNQD if for each \(n \geq 1\), the set of random variables \(\{X_{n,j}, 1 \leq j \leq k_{n} \}\) is PNQD. There is an interesting body of literature on the strong law of large numbers problem for row sums of rowwise PNQD arrays; see the discussion in Li et al. [7].
Definition 1.2
An array of random variables \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is said to be stochastically dominated by a random variable X if there exists a constant \(D < \infty \) such that
$$\mathbb{P} \bigl( \vert X_{n,j} \vert > x \bigr) \leq D \mathbb{P} \bigl( \vert X \vert > x \bigr), \quad x \geq 0, 1 \leq j \leq k_{n}, n \geq 1. \quad\quad (1.3) $$
Remark 1.1
Condition (1.3) is, of course, automatic with \(X = X_{1,1}\) and \(D = 1\) if the array \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) consists of identically distributed random variables.
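Beyond the identically distributed case, stochastic domination (taken here in the usual tail-comparison form) holds for many non-identically distributed arrays. As an illustration of ours, the hypothetical array with \(X_{n,j}\) uniform on \([-1/j, 1/j]\) is dominated, with \(D = 1\), by X uniform on \([-1, 1]\); the tail comparison can be checked numerically on a grid:

```python
# Tail comparison P(|X_{n,j}| > x) <= D * P(|X| > x) with D = 1 for the
# illustrative array X_{n,j} ~ Uniform[-1/j, 1/j] and X ~ Uniform[-1, 1].
def tail_array(j, x):
    # P(|X_{n,j}| > x) = max(0, 1 - j*x) for x >= 0
    return max(0.0, 1.0 - j * x)

def tail_dominating(x):
    # P(|X| > x) = max(0, 1 - x) for x >= 0
    return max(0.0, 1.0 - x)

for j in range(1, 20):
    for k in range(101):
        x = k / 100.0
        assert tail_array(j, x) <= tail_dominating(x) + 1e-12
```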
2 Preliminary lemmas
Three lemmas will now be stated. Lemmas 2.1, 2.2, and 2.3 are used in the proof of Theorem 3.1, Lemma 2.3 is used in the proof of Theorem 3.2, and Lemmas 2.1 and 2.2 are used in the proof of Theorem 3.3.
Lemma 2.1 follows from Lemma 1 of Lehmann [6]; see Matuła [8] for a more direct proof.
Lemma 2.1
Let the set of random variables \(\{ X_{1}, \ldots, X_{N} \} \) be PNQD, and for each \(j \in \{1, \ldots, N \}\), let \(f_{j}: \mathbb{R} \rightarrow \mathbb{R}\). If the functions \(f_{1}, \ldots, f_{N}\) are all nondecreasing or all nonincreasing, then the set of random variables \(\{ f_{1}(X_{1}), \ldots, f_{N}(X_{N}) \} \) is PNQD.
The next lemma is well known (see, e.g., Patterson and Taylor [10]).
Lemma 2.2
Let the set of random variables \(\{X_{1}, \ldots, X_{N} \}\) be PNQD with \(\mathbb{E}X_{j}^{2} < \infty \), \(1 \leq j \leq N\). Then
$$\mathbb{E} \Biggl( \sum_{j=1}^{N} ( X_{j} - \mathbb{E}X_{j} ) \Biggr) ^{2} \leq \sum_{j=1}^{N} \mathbb{E} ( X_{j} - \mathbb{E}X_{j} ) ^{2}. $$
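Lemma 2.2 bounds the variance of a PNQD sum by the sum of the variances (negative quadrant dependence forces nonpositive covariances). Assuming that standard form, the bound can be checked exactly, for instance on the without-replacement pair from the set \(\{1, 2, 3, 4\}\):

```python
from itertools import permutations

# A PNQD pair: two values sampled without replacement from {1, 2, 3, 4},
# all 12 ordered outcomes equally likely.
values = [1.0, 2.0, 3.0, 4.0]
pairs = list(permutations(values, 2))

def var(xs):
    # population variance of a finite uniform sample space
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_of_sum = var([a + b for a, b in pairs])                       # Var(X_1 + X_2)
sum_of_vars = var([a for a, _ in pairs]) + var([b for _, b in pairs])
assert var_of_sum <= sum_of_vars                                  # 5/3 <= 5/2
```

Here the negative covariance of the pair makes the inequality strict: \(\operatorname{Var}(X_{1} + X_{2}) = 5/3\) while \(\operatorname{Var}(X_{1}) + \operatorname{Var}(X_{2}) = 5/2\).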
The following lemma is essentially due to Adler et al. [1].
Lemma 2.3
(Adler et al. [1])
Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of random variables which is stochastically dominated by a random variable X, and let D be as in (1.3). Then
$$\mathbb{E} \vert X_{n,j} \vert I \bigl( \vert X_{n,j} \vert > x \bigr) \leq D \mathbb{E} \vert X \vert I \bigl( \vert X \vert > x \bigr), \quad x > 0, 1 \leq j \leq k_{n}, n \geq 1. $$
3 Main results
The main results, Theorems 3.1–3.3, may now be established. These are new results even under the stronger hypothesis that the random variables in each row of the array are independent.
Theorem 3.1
Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of rowwise PNQD mean 0 random variables which is stochastically dominated by a random variable X with \(\mathbb{E}\vert X \vert < \infty \). Let \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of constants such that
$$a_{n,j} \geq 0, \quad 1 \leq j \leq k_{n}, n \geq 1, \quad\quad (3.1) $$
and
$$\sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert < \infty \quad\mbox{and}\quad \lim_{n \rightarrow \infty } \sum_{j=1}^{k_{n}} a_{n,j}^{2} = 0. \quad\quad (3.2) $$
Then
$$\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0 \quad\quad (3.3) $$
and, a fortiori,
$$\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \stackrel{\mathbb{P}}{\longrightarrow } 0. $$
Proof
Let \(\epsilon > 0\) be arbitrary. Set \(C = \sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert \). Let \(D < \infty \) be as in (1.3). Since \(\mathbb{E}\vert X \vert < \infty \), we can choose \(A_{\epsilon } \in (0, \infty )\) such that
$$\mathbb{E} \vert X \vert I \bigl( \vert X \vert > A_{\epsilon } \bigr) < \frac{\epsilon }{2CD}. $$
Let
$$Y_{n,j} = \max \bigl\{ -A_{\epsilon }, \min \{ X_{n,j}, A_{\epsilon } \} \bigr\}, \quad 1 \leq j \leq k_{n}, n \geq 1, $$
and
$$Z_{n,j} = X_{n,j} - Y_{n,j}, \quad 1 \leq j \leq k_{n}, n \geq 1. $$
Then
and so
It follows from Lemma 2.1 that \(\{Y_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is an array of rowwise PNQD random variables. Again by Lemma 2.1, (3.1) ensures that \(\{a_{n,j}Y_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is an array of rowwise PNQD random variables. Note that \(\vert Y_{n,j} \vert \leq A_{\epsilon }\), \(1 \leq j \leq k_{n}\), \(n \geq 1\). Thus, for \(n \geq 1\), by Lemma 2.2
by the second half of (3.2). Thus
and, a fortiori,
Next, for \(n \geq 1\), by Lemma 2.3 and (1.3)
by the choice of \(A_{\epsilon }\).
Combining (3.4) and (3.5) yields
Since \(\epsilon > 0\) is arbitrary,
that is, (3.3) holds. □
Remark 3.1
One of the reviewers kindly called to our attention the article by Ordóñez Cabrera and Volodin [9] and suggested that we provide a comparison between Theorem 3.1 above and Theorem 1 of that article. Both theorems are in the same spirit in that they both establish mean convergence for weighted averages from an array of rowwise PNQD mean 0 random variables. Ordóñez Cabrera and Volodin [9] introduced the following new integrability concept for an array of random variables \(\{X_{n,j}, u_{n} \leq j \leq k_{n}, n \geq 1 \}\), which is weaker than several well-known integrability notions. The array of random variables is said to be h-integrable with respect to an array of constants \(\{a_{n,j}, u_{n} \leq j \leq k_{n}, n \geq 1 \}\) if
where \(\{h(n), n \geq 1 \}\) is a sequence of constants with \(0 < h(n) \uparrow \infty \). Ordóñez Cabrera and Volodin [9] established their Theorem 1 under an h-integrability assumption for the array. Suppose that \(u_{n} = 1\), \(n \geq 1\). It is clear that the stochastic domination condition in Theorem 3.1 is indeed a stronger condition than the array being h-integrable. However, Theorem 1 of Ordóñez Cabrera and Volodin [9] has the condition \(\lim_{n \rightarrow \infty } h^{2}(n) \sum_{j=1}^{k_{n}} a_{n,j}^{2} = 0\) which is stronger than the condition \(\lim_{n \rightarrow \infty } \sum_{j=1}^{k_{n}} a_{n,j}^{2} = 0\) in (3.2) of Theorem 3.1. Consequently, the two theorems being compared overlap with each other but neither theorem is contained in the other.
The next theorem is a version of Theorem 3.1 without assumption (3.1) for an array of random variables where the random variables in each row of the array are pairwise independent (which is a stronger assumption than the array being rowwise PNQD).
Theorem 3.2
Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of mean 0 random variables such that, for each \(n \geq 1\), the random variables \(X_{n,j}, 1 \leq j \leq k_{n}\) are pairwise independent. Suppose that the array \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is stochastically dominated by a random variable X with \(\mathbb{E}\vert X \vert < \infty \). Let \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of constants such that (3.2) holds. Then
$$\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0 $$
and, a fortiori,
$$\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \stackrel{\mathbb{P}}{\longrightarrow } 0. $$
Proof
Let \(\epsilon > 0\) be arbitrary, and let C, D, \(A_{\epsilon }\), \(Y_{n,j}\), and \(Z_{n,j}\), \(1 \leq j \leq k_{n}\), \(n \geq 1\) be as in the proof of Theorem 3.1. The pairwise independence assumption ensures that
and (3.4) follows arguing as in the proof of Theorem 3.1. Moreover, (3.5) holds by the same argument as in the proof of Theorem 3.1. The rest of the proof is identical to that in Theorem 3.1. □
Remark 3.2
The cited result of Chandra [3] follows immediately from Theorem 3.2 by taking \(k_{n} = n\), \(n \geq 1\) and \(X_{n,j} = X_{j}\), \(1 \leq j \leq n\), \(n \geq 1\).
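In the special case \(a_{n,j} = n^{-1}\) the conclusion is the Pyke and Root [12] law (1.1), and the rate of the \(\mathscr{L}_{1}\) convergence can be computed exactly for a concrete choice of distribution. The sketch below (our illustration, with Rademacher, i.e., ±1-valued mean 0, variables) evaluates \(\mathbb{E} \vert \sum_{j=1}^{n} X_{j}/n \vert \) from the Binomial(n, 1/2) law:

```python
from math import comb

# E|S_n / n| for S_n a sum of n i.i.d. Rademacher variables, computed exactly:
# S_n = 2B - n with B ~ Binomial(n, 1/2).
def l1_norm_of_average(n):
    return sum(comb(n, b) * abs(2 * b - n) for b in range(n + 1)) / (2 ** n * n)

norms = [l1_norm_of_average(n) for n in (1, 10, 100, 1000)]
assert norms[0] == 1.0
assert all(a > b for a, b in zip(norms, norms[1:]))  # decreases toward 0
```

The computed norms decay on the order of \(n^{-1/2}\), consistent with the central limit theorem heuristic \(\mathbb{E}\vert S_{n} \vert \approx \sqrt{2n/\pi }\).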
Remark 3.3
If the rowwise PNQD hypothesis in Theorem 3.1 is dispensed with, then the theorem can fail. To see this, let X be a nondegenerate mean 0 random variable, let \(k_{n} = n\), \(n \geq 1\), and let
$$X_{n,j} = X \quad\mbox{and}\quad a_{n,j} = \frac{1}{n}, \quad 1 \leq j \leq n, n \geq 1. $$
Then \(\{X_{n,j}, 1 \leq j \leq n, n \geq 1\}\) is not an array of rowwise PNQD random variables, (3.1) and (3.2) hold, but
$$\mathbb{E} \Biggl\vert \sum_{j=1}^{n} a_{n,j}X_{n,j} \Biggr\vert = \mathbb{E} \vert X \vert > 0, \quad n \geq 1. $$
This same example shows that Theorem 3.2 can fail without the pairwise independence hypothesis.
We now show via an example that the hypotheses of Theorems 3.1 and 3.2 do not necessarily ensure that \(\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \longrightarrow 0\) almost surely (a.s.).
Example 3.1
Let \(\{X_{n}, n \geq 1 \}\) be a sequence of i.i.d. random variables with \(\mathbb{E}X_{1} = 0\) and \(\mathbb{E}\vert X_{1} \vert ^{p} = \infty \) for some \(p > 1\). Set \(k_{n} = n\), \(n \geq 1\), \(X_{n,j} = X_{j}\), \(1 \leq j \leq k_{n}\), \(n \geq 1\), and
$$a_{n,n} = n^{-1/p} \quad\mbox{and}\quad a_{n,j} = 0, \quad 1 \leq j \leq n-1, n \geq 1. $$
Then (3.2) holds since
$$\sup_{n \geq 1} \sum_{j=1}^{n} \vert a_{n,j} \vert = \sup_{n \geq 1} n^{-1/p} = 1 < \infty $$
and
$$\lim_{n \rightarrow \infty } \sum_{j=1}^{n} a_{n,j}^{2} = \lim_{n \rightarrow \infty } n^{-2/p} = 0. $$
All of the hypotheses of Theorems 3.1 and 3.2 are satisfied and hence (3.3) holds.
Note that \(\{ \sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} = n^{-1/p}X_{n}, n \geq 1 \} \) is a sequence of independent random variables. Now, for arbitrary \(M \geq 1\),
$$\sum_{n=1}^{\infty } \mathbb{P} \bigl( \bigl\vert n^{-1/p}X_{n} \bigr\vert > M \bigr) = \sum_{n=1}^{\infty } \mathbb{P} \biggl( \frac{ \vert X_{1} \vert ^{p}}{M^{p}} > n \biggr) = \infty $$
since \(\mathbb{E}\vert X_{1} \vert ^{p} = \infty \). Then by the second Borel–Cantelli lemma,
$$\mathbb{P} \bigl( \bigl\vert n^{-1/p}X_{n} \bigr\vert > M \mbox{ infinitely often} \bigr) = 1, $$
and so
$$\limsup_{n \rightarrow \infty } \bigl\vert n^{-1/p}X_{n} \bigr\vert \geq M \quad \mbox{a.s.} $$
Thus,
$$\limsup_{n \rightarrow \infty } \bigl\vert n^{-1/p}X_{n} \bigr\vert = \infty \quad \mbox{a.s.}, $$
and so \(\sum_{j=1}^{k_{n}} a_{n,j} X_{n,j} \rightarrow 0\) a.s. fails.
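The divergence of the Borel–Cantelli series in Example 3.1 can be made concrete. The sketch below (our illustration) takes the symmetric tail \(\mathbb{P}(\vert X_{1} \vert > t) = t^{-p}\) for \(t \geq 1\), so that \(\mathbb{E}\vert X_{1} \vert < \infty \) for \(p > 1\) while \(\mathbb{E}\vert X_{1} \vert ^{p} = \infty \), and computes partial sums of the series:

```python
# Partial sums of the Borel-Cantelli series in Example 3.1 for the illustrative
# symmetric tail P(|X_1| > t) = t**(-p), t >= 1. For M >= 1,
#   P(n**(-1/p) |X_n| > M) = (M * n**(1/p))**(-p) = M**(-p) / n,
# so the series is a constant multiple of the harmonic series and diverges.
def partial_sum(N, M=2.0, p=1.5):
    return sum(M ** (-p) / n for n in range(1, N + 1))

sums = [partial_sum(10 ** k) for k in (2, 4, 6)]
assert sums[0] < sums[1] < sums[2]   # keeps growing, like M**(-p) * log N
assert sums[2] - sums[1] > 1.0
```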
We now establish Theorem 3.3. Throughout the rest of this section, for an array of random variables \(\{ X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \} \), let \(S_{n} = \sum_{j=1}^{k_{n}} X_{n,j}\), \(n \geq 1\).
Theorem 3.3
Let \(\{ X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \} \) be an array of rowwise PNQD \(\mathscr{L}_{1}\) random variables. Let \(g:[0, \infty ) \rightarrow [0, \infty )\) be a continuous function with
Let \(\{b_{n}, n \geq 1 \}\) be a sequence of positive constants with \(b_{n} \uparrow \infty \) and suppose that there exists a sequence of positive constants \(\{\alpha_{n}, n \geq 1 \}\) such that
Set
and assume that \(\mathbb{E}V_{n,j} < \infty \), \(1 \leq j \leq k_{n}\), \(n \geq 1\). Let \(\{d_{n}, n \geq 1\}\) be a sequence of positive constants and suppose for some sequence of positive constants \(\{c_{n}, n \geq 1 \}\) with \(c_{n} < b_{n}\), \(n \geq 1\) that
and
Then
$$\frac{S_{n} - \mathbb{E}S_{n}}{d_{n}} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0 \quad\quad (3.12) $$
and, a fortiori,
$$\frac{S_{n} - \mathbb{E}S_{n}}{d_{n}} \stackrel{\mathbb{P}}{\longrightarrow } 0. $$
Proof
For \(1 \leq j \leq k_{n}\) and \(n \geq 1\), set
and
Then \(Y_{n,j} + Z_{n,j} = X_{n,j}\), \(1 \leq j \leq k_{n}\), \(n \geq 1\). Set \(T_{n} = \sum_{j=1}^{k_{n}}Y_{n,j}\), \(n \geq 1\). We will show that
and
To prove (3.13), note that for \(1 \leq j \leq k_{n}\) and \(n \geq 1\),
and hence
To prove (3.14), note that for \(1 \leq j \leq k_{n}\) and \(n \geq 1\),
Then for \(n \geq 1\), since the set of random variables \(\{Y_{n,j}, 1 \leq j \leq k_{n} \}\) is PNQD by Lemma 2.1,
Thus
and hence (3.14) holds.
Finally, note that for \(n \geq 1\),
Now it follows from (3.13) that
The conclusion (3.12) follows from (3.15), (3.16), and (3.14). □
Corollary 3.1
Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be a uniformly bounded array of rowwise PNQD random variables. Let \(\{b_{n}, n \geq 1 \}\) be a sequence of constants with \(1 < b_{n} \uparrow \infty \). Then
$$\frac{S_{n} - \mathbb{E}S_{n}}{\sqrt{k_{n}b_{n}}} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0. \quad\quad (3.17) $$
Proof
Let \(d_{n} = \sqrt{k_{n}b_{n}}\), \(n \geq 1\) and \(c_{n} = \sqrt{b_{n}}\), \(n \geq 1\). Let \(g(v) = v\), \(v \geq 0\) and \(\alpha_{n} = 1\), \(n \geq 1\). Set
Since the array is comprised of uniformly bounded random variables, conditions (3.7), (3.8), (3.9), and (3.10) hold. Moreover, (3.11) also holds since \(c_{n} = \mathit{o} ( b_{n} ) \). The conclusion (3.17) follows from Theorem 3.3. □
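Corollary 3.1 can be checked exactly in a simple instance of our choosing: rows of \(k_{n} = n\) i.i.d. (hence PNQD) Rademacher variables, with the illustrative choice \(b_{n} = \log (n+2) > 1\). Here \(\mathbb{E}S_{n} = 0\) and \(\mathbb{E}\vert S_{n} \vert \) is available in closed form from the Binomial(n, 1/2) law:

```python
from math import comb, log

# Corollary 3.1 checked exactly for rows of n i.i.d. Rademacher variables:
# k_n = n, E S_n = 0, and the illustrative choice b_n = log(n + 2) > 1.
def mean_abs_sum(n):
    # E|S_n| from the Binomial(n, 1/2) law of B, where S_n = 2B - n
    return sum(comb(n, b) * abs(2 * b - n) for b in range(n + 1)) / 2 ** n

ratios = [mean_abs_sum(n) / (n * log(n + 2)) ** 0.5 for n in (10, 100, 1000)]
assert all(r1 > r2 for r1, r2 in zip(ratios, ratios[1:]))  # tends to 0
```

Since \(\mathbb{E}\vert S_{n} \vert \approx \sqrt{2n/\pi }\), the normed quantity behaves like \(\sqrt{2/(\pi b_{n})} \rightarrow 0\), in accordance with (the conclusion of) Corollary 3.1.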
Remark 3.4
If the rowwise PNQD hypothesis in Theorem 3.3 or Corollary 3.1 is dispensed with, then those results can fail. To see this, let \(\{k_{n}, n \geq 1 \}\) be a sequence of integers with \(1 < k_{n} \uparrow \infty \), let X be a bounded nondegenerate random variable, and set \(X_{n,j} = X\), \(1 \leq j \leq k_{n}\), \(n \geq 1\). Let \(b_{n} = k_{n}\), \(n \geq 1\). All of the conditions of Corollary 3.1 (hence of Theorem 3.3) are satisfied except for the rowwise PNQD hypothesis. The conclusions of Corollary 3.1 (hence of Theorem 3.3) fail since
$$\mathbb{E} \biggl\vert \frac{S_{n} - \mathbb{E}S_{n}}{\sqrt{k_{n}b_{n}}} \biggr\vert = \mathbb{E} \vert X - \mathbb{E}X \vert > 0, \quad n \geq 1. $$
We now show via an example that the hypotheses of Corollary 3.1 (hence of Theorem 3.3) do not necessarily ensure that
$$\frac{S_{n} - \mathbb{E}S_{n}}{\sqrt{k_{n}b_{n}}} \longrightarrow 0 \quad \mbox{a.s.} \quad\quad (3.18) $$
Example 3.2
Let \(\{X_{n}, n \geq 1\}\) be a sequence of nondegenerate i.i.d. uniformly bounded random variables, and let
Let \(X_{n,j} = X_{j}\), \(1 \leq j \leq n\), \(n \geq 1\). The hypotheses of Corollary 3.1 (hence of Theorem 3.3) are satisfied and so (3.12) and (3.17) hold. But by the Hartman and Wintner [5] law of the iterated logarithm,
and so (3.18) does not hold.
Corollary 3.2
Let \(\{X_{n,j}, 1 \leq j \leq n, n \geq 1 \}\) be an array of identically distributed rowwise PNQD \(\mathscr{L}_{1}\) random variables, and let \(\{b_{n}, n \geq 1 \}\) be a sequence of constants with \(1 < b_{n} \uparrow \infty \). If
then
Proof
We will apply Theorem 3.3 with \(g(v) = v\), \(0 \leq v < \infty \) and
Then \(c_{n} < b_{n}\), \(n \geq 1\) and (3.6) and (3.11) are immediate. Condition (3.7) is the same as (3.19). Since \(\mathbb{E}\vert X_{1,1} \vert < \infty \), conditions (3.8) and (3.9) are immediate. Condition (3.10) reduces to
which holds since \(\mathbb{E}\vert X_{1,1} \vert < \infty \). The conclusion (3.20) follows from Theorem 3.3. □
4 Conclusions
For an array of rowwise PNQD random variables \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\), conditions are provided under which the following degenerate mean convergence laws hold:
- (i)
$$\sum_{j=1}^{k_{n}}a_{n,j}X_{n,j} \stackrel{\mathscr{L}_{1}}{\longrightarrow} 0, $$
where \(\mathbb{E}X_{n,j} = 0\), \(1 \leq j \leq k_{n}\), \(n \geq 1\), and \(\{ a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \} \) is an array of constants;
- (ii)
$$\frac{\sum_{j=1}^{k_{n}} ( X_{n,j} - \mathbb{E}X_{n,j} ) }{d_{n}} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0, $$
where \(\{d_{n}, n \geq 1 \}\) is a sequence of positive constants.
A version of the result in (i) is also obtained for an array of rowwise pairwise independent random variables, and this result extends the result of Chandra [3]. Examples are provided showing that the above results can fail if the hypotheses are weakened and that a.s. convergence does not necessarily hold together with the \(\mathscr{L}_{1}\) convergence.
References
Adler, A., Rosalsky, A., Taylor, R.L.: Strong laws of large numbers for weighted sums of random elements in normed linear spaces. Int. J. Math. Math. Sci. 12, 507–529 (1989)
Bozorgnia, A., Patterson, R.F., Taylor, R.L.: Limit theorems for dependent random variables. In: Proceedings of the First World Congress of Nonlinear Analysts (WCNA’92), Tampa, FL, 1992, vol. II, pp. 1639–1650. de Gruyter, Berlin (1996)
Chandra, T.K.: On extensions of a result of S. W. Dharmadhikari. Bull. Calcutta Math. Soc. 82, 431–434 (1990)
Dharmadhikari, S.W.: A simple proof of mean convergence in the law of large numbers. Am. Math. Mon. 83, 474–475 (1976)
Hartman, P., Wintner, A.: On the law of the iterated logarithm. Am. J. Math. 63, 169–176 (1941)
Lehmann, E.L.: Some concepts of dependence. Ann. Math. Stat. 37, 1137–1153 (1966)
Li, D., Rosalsky, A., Volodin, A.I.: On the strong law of large numbers for sequences of pairwise negative quadrant dependent random variables. Bull. Inst. Math. Acad. Sin. (N.S.) 1, 281–305 (2006)
Matuła, P.: A note on almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 15, 209–213 (1992)
Ordóñez Cabrera, M., Volodin, A.I.: Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability. J. Math. Anal. Appl. 305, 644–658 (2005)
Patterson, R.F., Taylor, R.L.: Strong laws of large numbers for negatively dependent random elements. Nonlinear Anal., Theory Methods Appl. 30(7), 4229–4235 (1997). https://doi.org/10.1016/S0362-546X(97)00279-4
Pemantle, R.: Towards a theory of negative dependence. J. Math. Phys. 41, 1371–1390 (2000)
Pyke, R., Root, D.: On convergence in r-mean of normalized partial sums. Ann. Math. Stat. 39, 379–381 (1968)
Acknowledgements
The authors are grateful to the reviewers for carefully reading the manuscript and for offering helpful comments and suggestions.
Authors’ information
Tapas K. Chandra deceased before publication of this work was completed.
Funding
The research of Deli Li was partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada (Grant #: RGPIN-2014-05428).
Contributions
All authors contributed equally and significantly in writing this article. All the authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
About this article
Cite this article
Chandra, T.K., Li, D. & Rosalsky, A. Some mean convergence theorems for arrays of rowwise pairwise negative quadrant dependent random variables. J Inequal Appl 2018, 221 (2018). https://doi.org/10.1186/s13660-018-1811-y