Strong convergence theorems for coordinatewise negatively associated random vectors in Hilbert space

In this work, some strong convergence theorems are established for weighted sums of coordinatewise negatively associated random vectors in Hilbert spaces. The results obtained in this paper improve and extend the corresponding ones of Huan et al. (Acta Math. Hung. 144(1):132–149, 2014) as well as correct and improve the corresponding one of Ko (J. Inequal. Appl. 2017:290, 2017).


Introduction
The concept of complete convergence was first introduced by Hsu and Robbins [3] to prove that the arithmetic mean of independent and identically distributed (i.i.d.) random variables converges completely to the expectation of the random variables. Later on, Baum and Katz [4] generalized and extended this fundamental theorem as follows.
Theorem A Let α and r be real numbers such that r > 1, α > 1/2 and αr > 1, and let {X_n, n ≥ 1} be a sequence of i.i.d. random variables with zero mean. Then the following statements are equivalent: (a) E|X_1|^r < ∞;

Since the independence assumption is not reasonable in many practical statistical applications, this result has been extended to many classes of dependent random variables. A classical extension of independence is negative association (NA), which was introduced by Joag-Dev and Proschan [5] as follows: a finite family of random variables is NA if, for every pair of disjoint index sets A and B and all coordinatewise nondecreasing functions f_1 and f_2,

Cov(f_1(X_i, i ∈ A), f_2(X_j, j ∈ B)) ≤ 0,

whenever the covariance above exists. An infinite family of random variables is NA if every finite subfamily is NA.
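Statement (b) of Theorem A, in its classical formulation (Baum and Katz [4]), is the complete convergence of the normalized partial sums:

```latex
% Classical Baum--Katz statement (b), equivalent to E|X_1|^r < \infty
% under the assumptions r > 1, \alpha > 1/2, \alpha r > 1:
\sum_{n=1}^{\infty} n^{\alpha r - 2}\,
  P\Bigl( \max_{1 \le k \le n} \Bigl| \sum_{i=1}^{k} X_i \Bigr|
          > \varepsilon n^{\alpha} \Bigr) < \infty
  \quad \text{for all } \varepsilon > 0 .
```

The maximal and non-maximal forms of (b) are known to be equivalent under these assumptions.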
Let H be a real separable Hilbert space with norm ‖·‖ generated by an inner product ⟨·,·⟩. Denote X^(j) = ⟨X, e^(j)⟩, where {e^(j), j ≥ 1} is an orthonormal basis of H and X is an H-valued random vector. Ko et al. [10] introduced the following concept of an H-valued NA sequence.
Ko et al. [10] and Thanh [11] obtained almost sure convergence results for NA random vectors in Hilbert space. Miao [12] established the Hájek-Rényi inequality for H-valued NA random vectors.
Huan et al. [1] introduced the following concept of coordinatewise negative association for random vectors in Hilbert space, which is more general than that of Definition 1.2: a sequence {X_n, n ≥ 1} of H-valued random vectors is said to be coordinatewise negatively associated (CNA) if, for each j ≥ 1, the sequence of coordinates {X_n^(j), n ≥ 1} is NA. Obviously, if a sequence of random vectors in Hilbert space is NA, then it is CNA. However, the converse is not true in general; see Example 1.4 of Huan et al. [1].
Huan et al. [1] extended Theorem A from the independent case to CNA random vectors in Hilbert space. Huan [13] extended this complete convergence result for H-valued CNA random vectors to the case 1 < r < 2 and αr = 1. However, the interesting case r = 1, αr = 1 was not considered in these papers. Recently, Ko [2] extended the results of Huan et al. [1] from complete convergence to complete moment convergence as follows. For more details regarding complete moment convergence, one can refer to Ko [2] and the references therein.
However, there are some mistakes in the proof of the result in the case r = 1. Specifically, the bounds ∫_u^1 y^{r-2} dy ≤ C u^{r-1} in Eq. (2.7) and Σ_{n=1}^m n^{αr-1-α} ≤ C m^{αr-α} in Eq. (2.9) of Ko [2] fail when r = 1; the same problem also occurs in the proof of I_222 (see the proof of Lemma 2.5 in Ko [2]). Moreover, the interesting case αr = 1 was not considered in that paper.
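Both failures at r = 1 can be verified directly:

```latex
% At r = 1 the integral in Eq. (2.7) of Ko [2] is logarithmic, not bounded:
\int_u^1 y^{r-2}\,dy \,\Big|_{r=1}
  = \int_u^1 \frac{dy}{y} = \log\frac{1}{u} \longrightarrow \infty
  \quad (u \to 0^+),
\qquad \text{whereas } C u^{r-1}\big|_{r=1} = C .

% Likewise, at r = 1 the exponent in Eq. (2.9) becomes
% \alpha r - 1 - \alpha = -1, so the sum grows like \log m:
\sum_{n=1}^{m} n^{\alpha r - 1 - \alpha}\Big|_{r=1}
  = \sum_{n=1}^{m} \frac{1}{n} \sim \log m,
\qquad \text{whereas } C m^{\alpha r - \alpha}\big|_{r=1} = C m^{0} = C .
```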
In this paper, results on complete convergence and complete moment convergence are established for CNA random vectors in Hilbert spaces. The results focus on weighted sums, which are more general than partial sums. The interesting case αr = 1 is also considered. Moreover, the complete moment convergence results are obtained for exponents 0 < q < 2, while in Theorem B only the case q = 1 was treated.
Recall that if there exists a constant C such that

n^{-1} Σ_{i=1}^n P(|X_i^(j)| > x) ≤ C P(|X^(j)| > x)

for all j ≥ 1, n ≥ 1 and x ≥ 0, then the sequence {X_n, n ≥ 1} is said to be coordinatewise weakly upper bounded by X, where X_n^(j) = ⟨X_n, e^(j)⟩ and X^(j) = ⟨X, e^(j)⟩. Throughout the paper, C denotes a positive constant whose value may vary from place to place. Let log x = ln max(x, e) and let I(·) denote the indicator function.

Preliminaries
In this section, we state some lemmas which will be used in the proofs of our main results.

Lemma 2.1 (Huan et al. [1]) Let {X_n, n ≥ 1} be a sequence of H-valued CNA random vectors with zero means and E‖X_n‖^2 < ∞ for all n ≥ 1. Then

E max_{1≤k≤n} ‖Σ_{i=1}^k X_i‖^2 ≤ C Σ_{i=1}^n E‖X_i‖^2 for all n ≥ 1.
Lemma 2.2 (Kuczmaszewska [7]) Let {Z_n, n ≥ 1} be a sequence of random variables weakly dominated by a random variable Z, that is, sup_{n≥1} P(|Z_n| > x) ≤ C P(|Z| > x) for any x ≥ 0. Then, for any a > 0 and b > 0, there exist positive constants C_1 and C_2 such that

E|Z_n|^a I(|Z_n| ≤ b) ≤ C_1 [E|Z|^a I(|Z| ≤ b) + b^a P(|Z| > b)]

and

E|Z_n|^a I(|Z_n| > b) ≤ C_2 E|Z|^a I(|Z| > b).

Lemma 2.3 Let {X_n, n ≥ 1} be a sequence of zero mean H-valued CNA random vectors. Suppose that {X_n, n ≥ 1} is coordinatewise weakly upper bounded by a random vector X. Assume that one of the following assumptions holds:

Proof Without loss of generality, we may assume that a_{ni} ≥ 0 for each 1 ≤ i ≤ n, n ≥ 1. For any t > 0 and each j ≥ 1, define the truncated summands accordingly. It is easy to obtain the decomposition J_1 + J_2. By Lemma 2.2, we derive that J_1 < ∞, both if q < r and if r < q < 2. To estimate J_2, we first show that the truncated means are uniformly negligible. Indeed, noting by the Hölder inequality that Σ_{i=1}^n a_{ni} = O(n), we obtain the required bound from the zero mean assumption, provided that αr > 1; if αr = 1, the conclusion remains true by the dominated convergence theorem. Therefore, when n is large enough, for any t ≥ n^{αq}, J_2 splits into J_21 + J_22. Since (1) holds, similarly to the proof of J_1 < ∞ we have J_21 < ∞. Finally, we estimate J_22. By a standard calculation, J_22 is controlled by Σ_{n=1}^m n^{αr-αq-1}.
Since the upper bound of Σ_{n=1}^m n^{αr-αq-1} depends on the value of q, we consider the three cases q < r, q = r, and r < q < 2 separately; in each case the corresponding bound yields the desired conclusion. Consequently, the proof of the lemma is completed.
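The three regimes arise from the elementary estimate of Σ_{n=1}^m n^{αr-αq-1}, whose behavior is governed by the position of the exponent αr - αq - 1 = α(r - q) - 1 relative to -1:

```latex
% Elementary bounds for S_m = \sum_{n=1}^{m} n^{\alpha r - \alpha q - 1},
% obtained by comparison with the corresponding integral:
S_m \;\le\;
\begin{cases}
C\, m^{\alpha(r-q)}, & q < r \quad (\text{exponent} > -1),\\[2pt]
C \log m,            & q = r \quad (\text{exponent} = -1),\\[2pt]
C,                   & r < q < 2 \quad (\text{exponent} < -1,\ \text{convergent series}).
\end{cases}
```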

Main results and discussion
In this section, we will present the main results and their proofs as follows.
Proof Without loss of generality, we may assume that a_{ni} ≥ 0 for each 1 ≤ i ≤ n, n ≥ 1. For each n ≥ 1 and each j ≥ 1, denote

It is easy to obtain
By the weakly-upper-bounded assumption and Lemma 2.2, we have I_1 < ∞. To estimate I_2, we first show that the truncated means are uniformly negligible. Note by the Hölder inequality that Σ_{i=1}^n a_{ni} = O(n). So we obtain the required bound from the zero mean assumption if αr > 1; if αr = 1, the conclusion remains true by the dominated convergence theorem. Therefore, when n is large enough, noting that {a_{ni} U_i^(j), 1 ≤ i ≤ n, n ≥ 1} is NA for any j ≥ 1, one can see that {a_{ni}(U_i - EU_i), 1 ≤ i ≤ n, n ≥ 1} is CNA. Hence, by the Markov inequality, Lemmas 2.1 and 2.2, Σ_{i=1}^n a_{ni}^2 = O(n) and (2), we obtain I_2 ≤ I_21 + I_22. Similarly to the proof of I_1 < ∞, we have I_21 < ∞. Finally, we estimate I_22. It is easy to see that I_22 < ∞. The proof is completed.
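The chain of estimates in this step can be sketched as follows, under the assumption (consistent with Lemma 2.1) that zero mean CNA vectors Y_i satisfy the maximal second-moment inequality E max_{1≤k≤n} ‖Σ_{i=1}^k Y_i‖^2 ≤ C Σ_{i=1}^n E‖Y_i‖^2:

```latex
% Sketch of the Markov step, with Y_i = a_{ni}(U_i - EU_i) (zero mean, CNA):
P\Bigl( \max_{1\le k\le n} \Bigl\| \sum_{i=1}^{k} Y_i \Bigr\|
        > \varepsilon n^{\alpha} \Bigr)
\;\le\; \frac{1}{\varepsilon^{2} n^{2\alpha}}\,
        E \max_{1\le k\le n} \Bigl\| \sum_{i=1}^{k} Y_i \Bigr\|^{2}
\;\le\; \frac{C}{\varepsilon^{2} n^{2\alpha}}
        \sum_{i=1}^{n} a_{ni}^{2}\, E\|U_i - E U_i\|^{2} .
```

The condition Σ_{i=1}^n a_{ni}^2 = O(n) and Lemma 2.2 then control the right-hand side.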
Remark 3.1 Theorem 3.1 concerns the weighted sums of random vectors in Hilbert space. If we take a ni = 1 for any 1 ≤ i ≤ n, n ≥ 1, the result is still stronger than the corresponding one of Huan et al. [1] since the case αr = 1 was not considered in Huan et al. [1]; Huan [13] considered the case αr = 1 for the partial sums of random vectors in Hilbert space, but 1 < r < 2 was assumed in that paper. Therefore, Theorem 3.1 improves the corresponding results of Huan et al. [1] and Huan [13], respectively.
Proof Applying Theorem 3.1 with a_{ni} = a_i for each 1 ≤ i ≤ n, n ≥ 1, and α = 1/r, we have, for any ε > 0, the corresponding complete convergence, which together with the Borel-Cantelli lemma shows that, as m → ∞, the normalized maxima over the blocks 2^m ≤ k < 2^{m+1} converge to zero almost surely. Noting that, for any fixed n, there exists a positive integer m such that 2^m ≤ n < 2^{m+1}, we obtain the desired almost sure convergence. The proof is completed.

Theorem 3.3 Let 1 ≤ r < 2 and αr ≥ 1. Let {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} be an array of real numbers such that Σ_{i=1}^n a_{ni}^2 = O(n). Let {X_n, n ≥ 1} be a sequence of zero mean H-valued CNA random vectors. Suppose that {X_n, n ≥ 1} is coordinatewise weakly upper bounded by a random vector X. Assume that one of the following assumptions holds:

Proof From Theorem 3.1 and Lemma 2.3 we can see that the desired complete moment convergence holds. The proof is completed.
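The passage from the block indices to the full sequence is the standard subsequence argument; assuming the almost sure convergence of the block maxima obtained from the Borel-Cantelli lemma, it can be written as:

```latex
% Standard subsequence argument: for 2^m \le n < 2^{m+1},
% n^{1/r} \ge 2^{m/r} = 2^{-1/r}\, 2^{(m+1)/r}, hence
\frac{1}{n^{1/r}} \Bigl\| \sum_{i=1}^{n} a_i X_i \Bigr\|
\;\le\; \frac{2^{1/r}}{2^{(m+1)/r}}
        \max_{2^{m} \le k < 2^{m+1}}
        \Bigl\| \sum_{i=1}^{k} a_i X_i \Bigr\|
\;\xrightarrow[m \to \infty]{} 0 \quad \text{a.s.},
```

so the almost sure convergence holds along the full sequence of indices n.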
Remark 3.2 As stated in Sect. 1, the corresponding result in Ko [2] is incorrectly established when r = 1. If we take a_{ni} = 1 for any 1 ≤ i ≤ n, n ≥ 1 and q = 1, Theorem 3.3 is equivalent to the corresponding one of Ko [2] when 1 < r < 2 and αr > 1. The interesting case αr = 1, which was not considered in Ko [2], is also covered here. Consequently, Theorem 3.3 generalizes and improves the corresponding result of Ko [2].

Conclusions
In this paper, we investigate complete convergence and complete moment convergence for sequences of coordinatewise negatively associated random vectors in Hilbert spaces. The results obtained in this paper improve and extend the corresponding theorems of Huan et al. [1], as well as correct and improve the corresponding one of Ko [2].