An improvement of convergence rate in the local limit theorem for integral-valued random variables

Let $X_{1}, X_{2}, \ldots, X_{n}$ be independent integral-valued random variables, and let $S_{n}=\sum_{j=1}^{n}X_{j}$. One quantity of interest is the probability at a particular point, i.e., the density of $S_{n}$. A theorem that estimates this probability is called a local limit theorem. Such theorems are useful in finance, biology, and other fields. Petrov (Sums of Independent Random Variables, 1975) gave the rate $O(\frac{1}{n})$ in the local limit theorem under a finite third moment condition. Most convergence bounds are stated only up to the symbol $O$. Giuliano Antonini and Weber (Bernoulli 23(4B):3268–3310, 2017) were the first to give an explicit constant $C$ in an error bound of the form $\frac{C}{\sqrt{n}}$.
In this paper, we improve the convergence rate and the constants of the error bounds in the local limit theorem for $S_{n}$. Our constants are less complicated than earlier ones, and thus easy to use.


Introduction
Let $X_1, X_2, \ldots, X_n$ be independent integral-valued random variables with means $\mu_j$ and variances $\sigma_j^2$ for $j = 1, 2, \ldots, n$. Let $S_n = \sum_{j=1}^{n} X_j$, $\mu = \sum_{j=1}^{n} \mu_j$, and $\sigma^2 = \sum_{j=1}^{n} \sigma_j^2$. One of the interesting probabilities is the probability at a particular point, i.e., $P(S_n = k)$, where $k = 1, 2, \ldots$. There are two density functions, namely the discretized normal and the normal, that can be used to approximate this probability. The discretized normal random variable $Z_{\mu,\sigma^2}$ has the probability mass function
$$P(Z_{\mu,\sigma^2} = k) = \Phi\Big(\frac{k + \frac{1}{2} - \mu}{\sigma}\Big) - \Phi\Big(\frac{k - \frac{1}{2} - \mu}{\sigma}\Big), \quad k \in \mathbb{Z},$$
where $\Phi$ is the standard normal distribution function. To approximate $P(S_n = k)$ by using the discretized normal density function, we can apply the Berry–Esseen theorem. Berry [3] and Esseen [4] were the first two mathematicians to bound the difference between $P(S_n \le k)$ and the normal distribution function. Here is their result.
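As a quick numerical sketch (not part of the paper), the following Python snippet compares the exact binomial probabilities $P(S_n = k)$ for a symmetric binomial with the discretized normal mass function, assuming the usual discretization $\Phi(\frac{k+1/2-\mu}{\sigma}) - \Phi(\frac{k-1/2-\mu}{\sigma})$:

```python
import math

def norm_cdf(x):
    # standard normal distribution function Phi, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def discretized_normal_pmf(k, mu, sigma):
    # P(Z = k) = Phi((k + 1/2 - mu)/sigma) - Phi((k - 1/2 - mu)/sigma)
    return norm_cdf((k + 0.5 - mu) / sigma) - norm_cdf((k - 0.5 - mu) / sigma)

# compare with the exact binomial pmf for S_n ~ Bi(1/2), n = 100
n, p = 100, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
err = max(abs(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
              - discretized_normal_pmf(k, mu, sigma))
          for k in range(n + 1))
print(f"maximal pointwise error: {err:.2e}")
```

The maximal pointwise error is already small for $n = 100$, consistent with the orders of convergence discussed below.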
The local limit theorem describes how the probability mass function of a sum of independent discrete random variables approaches the normal density. De Moivre and Laplace (see [11]) established the local limit theorem for the binomial case in 1754. For sums of independent random variables, the local limit theorem can be proved by using the Berry–Esseen theorem, which yields the rate of convergence $O(\frac{1}{\sqrt{n}})$ (see [2]). In 1971, Ibragimov and Linnik improved the rate of convergence from $O(\frac{1}{\sqrt{n}})$ to $O(\frac{1}{n^{1/2+\alpha}})$, $0 < \alpha < \frac{1}{2}$, in the case of the $X_j$'s being identically distributed and square integrable. For the non-identical case, Petrov (1975, [1]) showed that if
(1) $\sigma^2 \to \infty$ as $n \to \infty$,
(2) $\sum_{j=1}^{n} E|X_j - \mu_j|^3 = O(\sigma^2)$,
(3) $P(X_j = 0) \ge P(X_j = m)$ for all $j$ and $m$, and
(4) $\gcd\{m : \frac{1}{\log n}\sum_{j=1}^{n} P(X_j = 0)P(X_j = m) \to \infty \text{ as } n \to \infty\} = 1$,
then the local limit theorem holds with the rate $O(\frac{1}{\sigma^2})$. Furthermore, Petrov ([1]; see also [2]) improved the rate of convergence from $O(\frac{1}{\sigma^2})$ to $O(\frac{1}{n\sqrt{n}})$ in the case of a symmetric binomial. In these previous studies, no explicit constants for the error bounds were given; most of the theorems were stated in $O$-notation. Therefore, finding the constants has been of interest. In 2018, Zolotukhin, Nagaev, and Chebotarev [12] gave the convergence with an explicit constant of the error bound in the case that $S_n$ is binomial. After that, Siripraparat and Neammanee [13] relaxed the identical-distribution condition and obtained the convergence in the case of the Poisson binomial in 2020. Furthermore, they treated the case of $S_n = \mathrm{Bi}(\frac{1}{2})$ being a symmetric binomial, i.e., $P(X_j = 1) = \frac{1}{2} = 1 - P(X_j = 0)$. In 2017, Giuliano Antonini and Weber [2] gave the rate of convergence $O(\frac{1}{\sigma})$ with an explicit constant of the error bound in the case of sums of independent lattice random variables. Here $X$ is a lattice random variable when the values of $X$ lie in $L(a, b) = \{v_k\}$, where $v_k = a + bk$, $k \in \mathbb{Z}$, $a$ is real, and $b > 0$.
They gave the following theorem.

Theorem 1.1 (See [2]) Let $X_1, X_2, \ldots, X_n$ be independent square integrable random variables taking values in a lattice $L(a, b)$, and let $S_n = \sum_{j=1}^{n} X_j$. Let $\alpha_X = \sum_{k \in \mathbb{Z}} \min\{P(X = v_k), P(X = v_{k+1})\}$, and let $V_j$'s, $L_j$'s, $\varepsilon_j$'s be such that $X_j \stackrel{\mathcal{D}}{=} V_j + b\varepsilon_j L_j$, where $P(L_j = 0) = P(L_j = 1) = \frac{1}{2}$ and $P(\varepsilon_j = 1) = 1 - P(\varepsilon_j = 0) = q_j$ with $0 < q_j \le \alpha_{X_j}$ for all $j = 1, 2, \ldots, n$, and $(V_j, \varepsilon_j)$ and $L_j$ are independent for each $j = 1, 2, \ldots, n$.
Assume that
(1) $\frac{\log \lambda_n}{\lambda_n} \le \frac{1}{14}$, where $\lambda_n = \sum_{j=1}^{n} q_j$, and
(2) $\frac{(k - ES_n)^2}{\operatorname{Var}(S_n)} \le \big(\frac{\lambda_n}{14 \log \lambda_n}\big)^{\frac{1}{2}}$ for all $k \in L(na, b)$.
Then $S_n$ satisfies the local limit theorem with an explicit error bound. Note that if we choose the constant $C_3$ of the error bound in (5), then $C_2$ is 36.1082, and the rate of Theorem 1.1 is $O(\frac{1}{\sigma})$. We can see that the bound of [2] depends on $C_3$ and is still complicated. In this work, we improve the rate of convergence of [2] to $O(\frac{1}{\sigma^2})$ and also give explicit constants of the error bounds. Our constants are not complicated and can be applied easily. The results are shown in the following.

Theorem 1.2 Let $X_1, X_2, \ldots, X_n$ be independent integral-valued random variables and $\alpha_j = 2\sum_{l=-\infty}^{\infty} p_{jl}\, p_{j(l+1)}$, where $p_{jl} = P(X_j = l)$. If $\alpha_j > 0$ for all $j = 1, 2, \ldots, n$, then $S_n$ satisfies the local limit theorem with an explicit error bound of order $O(\frac{1}{\sigma^2})$.

Here $b$ is said to be maximal when there are no other numbers $a'$ and $b' > b$ for which $P(X \in L(a', b')) = 1$.

Theorem 1.3 Let $X_1, X_2, \ldots, X_n$ be independent random variables in a maximal lattice $L(a, b)$. Then $S_n$ satisfies the local limit theorem with an explicit error bound, where $\alpha_j = 2\sum_{l=-\infty}^{\infty} p_{jl}\, p_{j(l+1)}$, $p_{jl} = P(X_j = a + bl)$, and $\alpha = \sum_{j=1}^{n} \alpha_j$.
Observe that the constants in Theorems 1.2–1.4 are simpler than the constant in Theorem 1.1.
We organize this paper as follows. In Sect. 2, we give the exponential bounds of a characteristic function which will be used to prove the main theorems in Sect. 3. After that we give some examples in Sect. 4.

Exponential bounds of a characteristic function
In this section, we let $X$ be an integral-valued random variable with characteristic function $\psi$, and we write $\theta(t)$ for the argument of $\psi(t)$, so that $\psi(t) = |\psi(t)| e^{i\theta(t)}$. Characteristic functions are important in probability theory and statistics, especially in local limit theorems, stability problems, etc. In the study of local limit theorems, it is required to estimate bounds for the modulus $|\psi(t)|$ of a characteristic function $\psi$. The various bounds for $|\psi(t)|$ play a key role in the investigation of the rate of convergence in local limit theorems. Previous studies have given bounds for $|\psi(t)|$ in the case of continuous and bounded random variables in a variety of versions (see [14–18], for example). In addition, bounds for $|\psi(t)|$ of a lattice random variable have been obtained in a number of research works (see [18–21], for example). Furthermore, there is an exponential bound for $|\psi(t)|$ of a Poisson binomial distribution, as shown in Neammanee [22].
In this section, we use the idea of Neammanee [22] to obtain the exponential bound for |ψ(t)| of an integral-valued random variable. The following lemmas are our results.
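To illustrate the flavor of such exponential bounds (a numerical sketch, not one of the paper's lemmas), the snippet below checks the classical inequality $|\psi(t)| \le e^{-2pqt^2/\pi^2}$ for $|t| \le \pi$ when $X$ is Bernoulli($p$); it follows from $|\psi(t)|^2 = 1 - 2pq(1 - \cos t)$, $1 - x \le e^{-x}$, and $1 - \cos t \ge \frac{2t^2}{\pi^2}$ on $[-\pi, \pi]$:

```python
import cmath
import math

def bernoulli_cf(p, t):
    # characteristic function psi(t) = q + p e^{it} of a Bernoulli(p) variable
    return (1 - p) + p * cmath.exp(1j * t)

def exp_bound(p, t):
    # classical exponential bound exp(-2 p q t^2 / pi^2), valid for |t| <= pi
    q = 1 - p
    return math.exp(-2 * p * q * t * t / math.pi ** 2)

# check |psi(t)| <= exp(-2 p q t^2 / pi^2) on a grid of t in [-pi, pi]
for p in (0.1, 0.3, 0.5, 0.8):
    for i in range(-100, 101):
        t = math.pi * i / 100
        assert abs(bernoulli_cf(p, t)) <= exp_bound(p, t) + 1e-12
print("exponential bound verified on the grid")
```

Bounds of this exponential shape are what drive the integral estimates in the proofs of the main theorems.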
We are now ready to prove Theorem 1.2.

Examples of the main results
In this section, we give applications, including the Poisson binomial, binomial, and negative binomial distributions, to which our main theorems can be applied, as shown in Examples 1–3. In addition, Example 4 gives a case to which our main results apply but the result of Petrov [1] does not.
Example 1 If $X_1, X_2, \ldots, X_n$ are independent Bernoulli random variables with $P(X_j = 1) = p_j$ and $P(X_j = 0) = q_j$, where $p_j + q_j = 1$ for $j = 1, 2, \ldots, n$, then $S_n$ is a Poisson binomial random variable, and the bound (42) holds with $\mu = \sum_{j=1}^{n} p_j$ and $\sigma^2 = \sum_{j=1}^{n} p_j q_j$.
Proof Note that $E|X_j|^3 = p_j$ and $\alpha = \sum_{j=1}^{n} 2 p_j q_j = 2\sigma^2$. Hence, by Theorem 1.2, we see that (42) holds.
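As a numerical sketch of Example 1 (the success probabilities below are illustrative choices, not from the paper), one can compute the Poisson binomial mass function exactly by convolution and compare it with the normal density $\frac{1}{\sigma}\phi(\frac{k-\mu}{\sigma})$:

```python
import math

def poisson_binomial_pmf(ps):
    # dynamic-programming convolution: pmf of S_n = sum of Bernoulli(p_j)
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            new[k] += mass * (1 - p)
            new[k + 1] += mass * p
        pmf = new
    return pmf

ps = [0.2 + 0.6 * j / 199 for j in range(200)]  # heterogeneous p_j in [0.2, 0.8]
mu = sum(ps)
sigma = math.sqrt(sum(p * (1 - p) for p in ps))
pmf = poisson_binomial_pmf(ps)
err = max(abs(pmf[k] - math.exp(-((k - mu) / sigma) ** 2 / 2)
              / (sigma * math.sqrt(2 * math.pi)))
          for k in range(len(pmf)))
print(f"maximal pointwise error: {err:.2e}")
```

The convolution costs $O(n^2)$ operations, while the normal approximation is closed-form; the local limit theorem quantifies the gap between the two.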
Example 2 Let $S_n \sim \mathrm{Bi}(p)$ be a binomial random variable. Then the bound of Example 1 holds with $\mu = np$ and $\sigma^2 = npq$. Proof We can apply Example 1 by letting $p_j = p$ and $q_j = q$ for all $j$.
Observe that the results in Examples 1 and 2 have the same order as (3) and (4), but the constants are bigger. However, (3) and (4) cannot be applied to the following example.
Proof Let $\psi$ be the characteristic function of $X_j$. Then $\psi(t) = \frac{pe^{it}}{1 - qe^{it}}$. Hence, by Theorem 1.4, we get (43).
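As a sanity check on the characteristic function used here (a sketch assuming the support convention $P(X_j = k) = pq^{k-1}$, $k = 1, 2, \ldots$, which is consistent with the displayed $\psi$), the closed form $\psi(t) = \frac{pe^{it}}{1 - qe^{it}}$ can be compared with a truncated series:

```python
import cmath

def geometric_cf(p, t, terms=10_000):
    # truncated series sum_{k>=1} p q^{k-1} e^{itk} for a geometric variable on {1, 2, ...}
    q = 1 - p
    return sum(p * q ** (k - 1) * cmath.exp(1j * t * k) for k in range(1, terms + 1))

def geometric_cf_closed(p, t):
    # closed form psi(t) = p e^{it} / (1 - q e^{it}) as in the proof above
    q = 1 - p
    return p * cmath.exp(1j * t) / (1 - q * cmath.exp(1j * t))

p, t = 0.3, 1.7
print(abs(geometric_cf(p, t) - geometric_cf_closed(p, t)))
```

The difference is only the (geometrically small) truncation error, confirming the closed form used in the proof.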