An improvement of convergence rate in the local limit theorem for integral-valued random variables
Journal of Inequalities and Applications volume 2021, Article number: 57 (2021)
Abstract
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables, and let \(S_{n}=\sum_{j=1}^{n}X_{j}\). A quantity of particular interest is the probability that \(S_{n}\) takes a given value, i.e., the density of \(S_{n}\). The theorem that estimates this probability is called the local limit theorem, and it is useful in finance, biology, etc. Petrov (Sums of Independent Random Variables, 1975) gave the rate \(O (\frac{1}{n} )\) in the local limit theorem under a finite third moment condition. Such convergence bounds are usually stated only in terms of the symbol O. Giuliano Antonini and Weber (Bernoulli 23(4B):3268–3310, 2017) were the first to give an explicit constant C in an error bound \(\frac{C}{\sqrt{n}}\). In this paper, we improve the convergence rate and the constants of the error bounds in the local limit theorem for \(S_{n}\). Our constants are less complicated than earlier ones and thus easy to use.
1 Introduction
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables with means \(\mu _{j}\) and variances \(\sigma _{j}^{2}\) for \(j=1, 2, \ldots , n\). Let \(S_{n}=\sum_{j=1}^{n}X_{j}\), \(\mu =\sum_{j=1}^{n}\mu _{j}\), and \(\sigma ^{2}=\sum_{j=1}^{n}\sigma _{j}^{2}\). A quantity of particular interest is the probability at a given point, i.e., \(P(S_{n}=k)\), where \(k=1, 2, \ldots \) . Two density functions, the discretized normal and the normal, are used to approximate this probability. The discretized normal random variable \((\widetilde{Z}_{\mu ,\sigma ^{2}})\) has the probability mass function
where \(Z_{\mu ,\sigma ^{2}}\) is a normal distribution with mean μ and variance \(\sigma ^{2}\).
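The discretized normal mass function (whose display is not reproduced above) assigns to each integer the normal probability of the surrounding unit interval. A minimal Python sketch of this, assuming the standard half-integer discretization \(P(\widetilde{Z}_{\mu ,\sigma ^{2}}=k)=P (k-\frac{1}{2}<Z_{\mu ,\sigma ^{2}}\leq k+\frac{1}{2} )\):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2), written via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def discretized_normal_pmf(k, mu, sigma2):
    # P(Z~ = k) = P(k - 1/2 < Z <= k + 1/2): the normal mass of the
    # unit interval centered at the integer k
    sigma = sqrt(sigma2)
    return normal_cdf(k + 0.5, mu, sigma) - normal_cdf(k - 0.5, mu, sigma)

# the masses over the integers sum to 1 (up to the truncated tails)
total = sum(discretized_normal_pmf(k, 10.0, 4.0) for k in range(-40, 60))
```

Summing the masses over a wide enough range of integers recovers 1, and the discretized mean recovers μ, which is what makes this a genuine integer-valued competitor to the continuous density.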
To approximate \(P(S_{n}=k)\) by using the discretized normal density function, we can apply the Berry–Esseen theorem. Berry [3] and Esseen [4] were the first two mathematicians who gave the bound between \(P(S_{n}\leq k)\) and the normal distribution. Here is their result.
If \(E \vert X_{j} \vert ^{3}< \infty \) for \(j=1, 2, \ldots , n\), then
where \(C_{0}\) is an absolute constant.
We can apply (1) to show that
The constant \(C_{0}\) in (2) has been found and improved by many mathematicians (see [3–10] for examples). The best values of \(C_{0}\), obtained by Shevtsova [8] in 2013, are 0.5583 in the non-identically distributed case and 0.469 in the identically distributed case.
The local limit theorem describes how the probability mass function of a sum of independent discrete random variables approaches the normal density.
Let
De Moivre and Laplace (see [11]) established the local limit theorem for the binomial case in 1754. For sums of independent random variables, the local limit theorem can be proved by using the Berry–Esseen theorem, which yields the rate of convergence \(O (\frac{1}{\sqrt{n}} )\) (see [2]).
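As a numerical illustration of the De Moivre–Laplace local limit theorem (ours, not from the paper), the following sketch measures the maximal gap between the binomial mass function and the matching normal density; the error shrinks as n grows:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(n, p, k):
    # P(S_n = k) for S_n ~ Bi(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_density(x, mu, sigma):
    # density of N(mu, sigma^2) at x
    return exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * sqrt(2.0 * pi))

def max_local_error(n, p):
    # max_k | P(S_n = k) - (1/sigma) * phi((k - n p)/sigma) |
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return max(abs(binom_pmf(n, p, k) - normal_density(k, mu, sigma))
               for k in range(n + 1))

err_100, err_400 = max_local_error(100, 0.3), max_local_error(400, 0.3)
```

Running this for n = 100 and n = 400 with p = 0.3 shows the maximal error decreasing with n, as the local limit theorem predicts.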
In 1971, Ibragimov and Linnik improved the rate of convergence from \(O (\frac{1}{\sqrt{n}} )\) to \(O (\frac{1}{n^{\frac{1}{2}+\alpha }} )\), \(0<\alpha < \frac{1}{2}\), in the case where the \(X_{j}\)s are identically distributed and square integrable.
For the non-identical case, Petrov (1975, [1]) showed that if
1. \(\sigma ^{2}\to \infty \) as \(n\to \infty \),
2. \(\sum_{j=1}^{n}E \vert X_{j}-\mu _{j} \vert ^{3}=O (\sigma ^{2} )\),
3. \(P(X_{j}=0)\geq P(X_{j}=m)\) for all j and m, and
4. \(\gcd \{ m:\frac{1}{\log n}\sum_{j=1}^{n}P(X_{j}=0)P(X_{j}=m) \to \infty \text{ as }n\to \infty \} =1\),
then
Furthermore, Petrov ([1], see also [2]) improved the rate of convergence from \(O (\frac{1}{\sigma ^{2}} )\) to \(O (\frac{1}{n\sqrt{n}} )\) in the case of a symmetric binomial.
In the studies above, no explicit constants for the error bounds were given; most of the theorems were stated in terms of the symbol O. Finding explicit constants is therefore of interest. In 2018, Zolotukhin, Nagaev, and Chebotarev [12] gave the convergence with an explicit constant in the case that \(S_{n}\) is binomial. They showed that
After that, Siripraparat and Neammanee [13] relaxed the identical-distribution condition and obtained the convergence in the Poisson binomial case in 2020. Their result is
Furthermore, in the case of \(S_{n}=\operatorname{Bi} (\frac{1}{2} )\) being a symmetric binomial, i.e., \(P(X_{j}=1)=\frac{1}{2}=1-P(X_{j}=0)\), they showed that
In 2017, Giuliano Antonini and Weber [2] gave the rate of convergence \(O (\frac{1}{\sigma } )\) with an explicit constant of the error bound in the case of sums of independent lattice random variables. A random variable X is a lattice random variable when its values lie in \(L(a,b)=\{v_{k}\}\), where \(v_{k}=a+bk\), \(k\in \mathbb{Z}\), and a and \(b>0\) are real numbers. They gave the following theorem.
Theorem 1.1
(See [2])
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent square integrable random variables taking values in a lattice \(L(a, b)\) and \(S_{n}=\sum_{j=1}^{n}X_{j}\). Let \(\alpha _{X}=\sum_{k\in \mathbb{Z}}\min \{ P(X=v_{k}), P(X=v_{k+1}) \} \) and \(V_{j}\)s, \(L_{j}\)s, \(\epsilon _{j}\)s be such that
where \(P(L_{j}=0)=P(L_{j}=1)=\frac{1}{2}\), \(P(\epsilon _{j}=1)=1-P(\epsilon _{j}=0)=q_{j}\), where \(0< q_{j}\leq \alpha _{X_{j}}\) for all \(j=1, 2, \ldots , n\), and \((V_{j}, \epsilon _{j})\) and \(L_{j}\) are independent for each \(j=1, 2, \ldots , n\).
Assume that
1. \(\frac{\log \lambda _{n}}{\lambda _{n}}\leq \frac{1}{14}\), where \(\lambda _{n}=\sum_{j=1}^{n}q_{j}\),
2. \(\frac{(k-ES_{n})^{2}}{\operatorname{Var}(S_{n})}\leq ( \frac{\lambda _{n}}{14\log \lambda _{n}} )^{\frac{1}{2}}\) for all \(k\in L(na, b)\).
Then
where
Note that if we choose the constant of error bound \(C_{3}\) in (5), then \(C_{2}\) is 36.1082 and the rate of Theorem 1.1 is \(O (\frac{1}{\sigma } )\). We can see that the bound of [2] depends on \(C_{3}\) and is rather complicated. In this work, we improve the rate of convergence of [2] to \(O (\frac{1}{\sigma ^{2}} )\) and give explicit constants for the error bounds. Our constants are not complicated and can be applied easily. The results are the following.
Theorem 1.2
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables and \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), where \(p_{jl}=P(X_{j}=l)\). If \(\alpha _{j}>0\) for all \(j=1, 2, \ldots , n\), then
where \(\tau =\frac{1}{10\sqrt[3]{\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}}\) and \(\alpha =\sum_{j=1}^{n}\alpha _{j}\).
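The quantity \(\alpha _{j}\) measures how much mass \(X_{j}\) places on pairs of adjacent integers; the theorem requires every \(\alpha _{j}\) to be positive. A small sketch (our illustration, not from the paper), computing α for a pmf stored as a dictionary:

```python
def alpha(pmf):
    # alpha = 2 * sum_l p_l * p_{l+1} for an integer-valued pmf given
    # as {l: P(X = l)}; positive iff some two adjacent integers both
    # carry mass
    return 2.0 * sum(p * pmf.get(l + 1, 0.0) for l, p in pmf.items())

# Bernoulli(p): alpha = 2 p q
a_bern = alpha({0: 0.7, 1: 0.3})   # 2 * 0.7 * 0.3 = 0.42
# a distribution supported on {0, 2} only has alpha = 0, so the
# hypothesis alpha_j > 0 of Theorem 1.2 excludes it
a_gap = alpha({0: 0.5, 2: 0.5})    # 0.0
```

The second example shows why the condition \(\alpha _{j}>0\) is natural: a variable supported on even integers alone lives on a coarser lattice, and the integer-lattice local limit theorem cannot hold for it as stated.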
The lattice span b is said to be maximal when there are no real numbers \(a'\) and \(b'>b\) for which \(P(X\in L(a',b'))=1\).
Theorem 1.3
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent random variables in a maximal lattice \(L(a, b)\) and
Then
where \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), \(p_{jl}=P(X_{j}=a+bl)\), and \(\alpha =\sum_{j=1}^{n}\alpha _{j}\).
Theorem 1.4
If \(X_{1}, X_{2}, \ldots , X_{n}\) in Theorem 1.3 are independent and identically distributed (i.i.d.), then
where \(\tau =\frac{1}{10\sqrt[3]{nE \vert X_{1} \vert ^{3}}} \) and \(\alpha =2n\sum_{l=-\infty }^{\infty }p_{l}p_{l+1}\), \(p_{l}=P(X_{1}=a+bl)\).
Observe that the constants in Theorems 1.2–1.4 are simpler than the constant in Theorem 1.1.
We organize this paper as follows. In Sect. 2, we give the exponential bounds of a characteristic function which will be used to prove the main theorems in Sect. 3. After that we give some examples in Sect. 4.
2 Exponential bounds of a characteristic function
In this section, let X be an integral-valued random variable with characteristic function ψ, and let \(\theta (t)\) denote the argument of \(\psi (t)\). Then
and
Characteristic functions are important in probability theory and statistics, especially in local limit theorems, stability problems, etc. In the study of local limit theorems, it is necessary to estimate the modulus \(\vert \psi (t) \vert \) of a characteristic function ψ, and the various bounds for \(\vert \psi (t) \vert \) play a key role in investigating the rate of convergence. Previous studies have given bounds for \(\vert \psi (t) \vert \) for continuous and bounded random variables in a variety of versions (see [14–18] for example). In addition, bounds for \(\vert \psi (t) \vert \) of a lattice random variable have appeared in a number of works (see [18–21] for example). Furthermore, there is an exponential bound for \(\vert \psi (t) \vert \) of a Poisson binomial distribution, shown in Neammanee [22]. In this section, we use the idea of Neammanee [22] to obtain an exponential bound for \(\vert \psi (t) \vert \) of an integral-valued random variable. The following lemmas are our results.
Lemma 2.1
Let \(t\in [0, \pi )\) and \(\alpha =2\sum_{j=-\infty }^{\infty }p_{j}p_{j+1}\). Then \(\vert \psi (t) \vert \leq e^{-\frac{1}{\pi ^{2}}\alpha t^{2}}\).
Proof
Let \(t\in [0, \pi )\). If \(\vert \psi (t) \vert =0\), then Lemma 2.1 holds. Assume that \(\vert \psi (t) \vert >0\).
Note that
Since \(\vert \psi (t) \vert ^{2}\) is real, we get
From this fact and the fact that \(\vert \psi (t) \vert >0\), we have
where we use the fact that \(\sin (\frac{t}{2} )\geq \frac{t}{\pi }\) on \([0, \pi )\) in the last inequality.
Hence, \(\vert \psi (t) \vert \leq e^{-\frac{1}{\pi ^{2}}\alpha t^{2}}\). □
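As a sanity check (ours, not the paper's), the bound of Lemma 2.1 can be verified numerically for a concrete integer-valued distribution:

```python
import cmath
from math import exp, pi

# a concrete integer-valued law (illustrative choice)
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
alpha = 2.0 * sum(p * pmf.get(l + 1, 0.0) for l, p in pmf.items())  # 0.5

def psi(t):
    # characteristic function psi(t) = E[exp(i t X)]
    return sum(p * cmath.exp(1j * t * l) for l, p in pmf.items())

# Lemma 2.1: |psi(t)| <= exp(-alpha t^2 / pi^2) for t in [0, pi)
holds = all(
    abs(psi(0.01 * k * pi)) <= exp(-alpha * (0.01 * k * pi) ** 2 / pi**2) + 1e-12
    for k in range(100)
)
```

For this law the bound is far from tight near t = 0 (where \(\vert \psi (t) \vert \approx 1-\frac{1}{2}\sigma ^{2}t^{2}\) decays faster than the right-hand side), which is consistent with the lemma being a uniform bound over the whole interval.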
Lemma 2.2
For \(t\in [0, \pi ]\),
Proof
The lemma holds if \(\vert \psi (t) \vert =0\). Assume that \(\vert \psi (t) \vert >0\).
Note that
and
where we use the fact that \((a+b)^{k}\leq 2^{k-1}(a^{k}+b^{k})\), \(a, b\geq 0\), and \(k\in \mathbb{N}\) in the first inequality.
From the fact that
Hence, \(\vert \psi (t) \vert \leq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}+\frac{2}{3}E \vert X \vert ^{3}t^{3}}\). □
Lemma 2.3
Let \(\tau _{1}=\frac{1}{10\sqrt[3]{E \vert X \vert ^{3}}}\). Then
Proof
Since \(\vert \sin (\theta ) \vert \leq \vert \theta \vert \) and (10),
Note that
From (12), (13), and the fact that \(t^{2}\leq \frac{1}{100(E \vert X \vert ^{3})^{\frac{2}{3}}}\),
Therefore,
and
By (9), (12), (13), and (15), we get
Hence, \(\vert \psi (t) \vert \geq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}-{\frac{2}{3}E \vert X \vert ^{3}t^{3}}} \) for \(t\in [0, \tau _{1} ]\). □
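Lemmas 2.2 and 2.3 together pin \(\vert \psi (t) \vert \) between two explicit exponentials on \([0, \tau _{1}]\). A numerical check on a concrete distribution (our illustration):

```python
import cmath
from math import exp

# a concrete integer-valued law (illustrative choice)
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
mean = sum(l * p for l, p in pmf.items())
var = sum(l * l * p for l, p in pmf.items()) - mean**2
m3 = sum(abs(l) ** 3 * p for l, p in pmf.items())
tau1 = 1.0 / (10.0 * m3 ** (1.0 / 3.0))  # as in Lemma 2.3

def psi_abs(t):
    return abs(sum(p * cmath.exp(1j * t * l) for l, p in pmf.items()))

# exp(-var t^2/2 - (2/3) m3 t^3) <= |psi(t)| <= exp(-var t^2/2 + (2/3) m3 t^3)
ok = True
for k in range(1, 101):
    t = tau1 * k / 100.0
    lo = exp(-0.5 * var * t * t - (2.0 / 3.0) * m3 * t**3)
    hi = exp(-0.5 * var * t * t + (2.0 / 3.0) * m3 * t**3)
    ok = ok and (lo - 1e-12 <= psi_abs(t) <= hi + 1e-12)
```

The sandwich is tight for small t because both sides agree with \(\vert \psi (t) \vert \) to second order; the cubic term is exactly the slack the lemmas allow.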
Lemma 2.4
-
1
\(\theta ^{(1)}(0)=EX\).
-
2
\(\theta ^{(2)}(0)=0\).
-
3
\(\vert \theta ^{(3)}(t) \vert \leq 4.2874E \vert X \vert ^{3}\) for \(t \in [0, \tau _{1} ]\).
Proof
1. By (6), we get
2. Let \(A(t)=\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{ \infty }j\cos ((j-l)t )p_{j}p_{l}\) and \(B(t)=\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{ \infty }\cos ((j-l)t )p_{j}p_{l}\).
Observe that
where
Since \(A'(0)=0\) and \(B'(0)=0\), \(\theta ^{(2)}(0)=0\).
3. Note that
similarly to (10), we get
Therefore,
Hence,
and
By (14), we get
From this fact and \(B(t)\leq 1\),
and
By (16), we obtain
From this fact and (17)–(22), we get \(\vert \theta ^{(3)}(t) \vert \leq 4.2874E \vert X \vert ^{3}\). □
3 Proof of the main results
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables. Let \(S_{n}:=\sum_{i=1}^{n}X_{i}\), \(\mu :=ES_{n}\) and \(\sigma ^{2}:=\operatorname{Var}S_{n}\). Let \(\psi _{1}, \psi _{2}, \ldots , \psi _{n}\) and ψ be the characteristic functions of \(X_{1}, X_{2}, \ldots , X_{n}\) and \(S_{n}\), respectively. Then, for \(j=1, 2, \ldots , n\),
and
Note that \(\psi _{j}(t)= \vert \psi _{j}(t) \vert e^{i\theta _{j}(t)}\),
where \(\theta _{j}(t):=\text{argument of } \psi _{j}(t)=\arctan ( \frac{\sum_{l=-\infty }^{\infty }p_{jl}\sin (lt)}{\sum_{l=-\infty }^{\infty }p_{jl}\cos (lt)} )\).
Hence, \(\psi (t)=\rho (t)e^{i\theta (t)}\), where \(\theta (t)=\sum_{j=1}^{n}\theta _{j}(t)(\bmod 2\pi )\) and \(\rho (t)=\prod_{j=1}^{n} \vert \psi _{j}(t) \vert \).
From Siripraparat and Neammanee [13], we know that
where \(\alpha (t)=\theta (t)-\mu t\).
To prove our main theorems, we give the bound of \(\rho (t)\) and \(\cos ((k-\mu )t-\alpha (t) )\) in Lemma 3.1 and Lemma 3.2, respectively.
Lemma 3.1
Let \(\tau =\min (\frac{1}{10\sqrt[3]{\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}}, \pi )\). Then
Proof
By Lemma 2.2 and Lemma 2.3, we get
By (24), we obtain
Thus,
Hence,
where we have used the fact
Since \(t^{3}\leq \frac{1}{1000\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\) and (25), \(\vert \rho (t)-e^{-\frac{1}{2}\sigma ^{2}t^{2}} \vert \leq {0.6672 \sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}e^{-\frac{1}{2}\sigma ^{2}t^{2}}}\). □
Lemma 3.2
For \(t\in [0, \tau ]\), we have \(\cos ((k-\mu )t-\alpha (t) )=\cos ((k-\mu )t)+\triangle \), where \(\vert \triangle \vert \leq 0.7152\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}\).
Proof
Using Taylor’s expansion, we have
By Lemma 2.4, (28) and the fact that \(\tau \leq \tau _{1}\), we get
where
By (29) and \(t^{3}\leq \frac{1}{1000\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\), we obtain \(\vert \alpha (t) \vert \leq \frac{0.7146}{1000}\).
From this fact, (29) and (30) imply that \(\vert \triangle \vert \leq 0.7152\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}\). □
We are now ready to prove Theorem 1.2.
Proof of Theorem 1.2
Note that
where \(\triangle _{1}=\frac{1}{\pi }\int _{\tau }^{\pi }\rho (t) \cos ((k-\mu )t-\alpha (t))\,dt\).
By Lemma 2.1, \(\vert \triangle _{1} \vert \leq \frac{1}{\pi }\int _{\tau }^{\pi } \rho (t)\,dt \leq \frac{1}{\pi }\int _{\tau }^{\infty }e^{- \frac{1}{\pi ^{2}}\alpha t^{2}}\,dt \leq \frac{\pi }{2\tau \alpha }e^{- \frac{\tau ^{2}\alpha }{\pi ^{2}}}\).
From the fact that
and Lemma 3.1, we have
where
From (32) and Lemma 3.2, we get
where
By (31) and (33)–(36), we obtain
where
From (10), we can see that
which implies that \(e^{-\frac{1}{2}\sigma ^{2}t^{2}}\leq e^{-\frac{1}{4}\alpha t^{2}}\). From this fact, we get
From this fact and (37) and (38), we have
where
Using the fact that
(see [13], p. 7), we obtain
By (23), (39), (40), and (41), we can conclude that
where \(\vert \triangle _{6} \vert \leq \frac{2.2075e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}}{\tau \alpha }+{ \frac{1.7898}{\sigma ^{4}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\). □
Proof of Theorem 1.3
Let \(Y_{j}=\frac{X_{j}}{b}-\frac{a}{b}\). Then
and
Since b is maximal, we have \(\alpha =\sum_{j=1}^{n}\alpha _{j}>0\),
where \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), \(p_{jl}=P(X_{j}=a+bl)\).
4 Examples of the main results
In this section, we apply our main theorems to the Poisson binomial, binomial, and negative binomial distributions, as shown in Examples 1–3. In addition, Example 4 gives a case to which our main results apply but the result of Petrov [1] does not.
Example 1
If \(X_{1}, X_{2}, \ldots , X_{n}\) are independent Bernoulli random variables with \(P(X_{j}=1)=p_{j}\) and \(P(X_{j}=0)=q_{j}\), where \(p_{j}+q_{j}=1\) for \(j=1, 2, \ldots , n\), then \(S_{n}\) is a Poisson binomial random variable and
where \(\mu =\sum_{j=1}^{n}p_{j}\) and \(\sigma ^{2}=\sum_{j=1}^{n}p_{j}q_{j}\).
Proof
Note that \(E \vert X_{j} \vert ^{3}=p_{j}\) and
Hence, by Theorem 1.2, we see that (42) holds. □
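The approximation in Example 1 can be examined numerically: the exact Poisson binomial mass function is obtained by convolving the Bernoulli laws, and the maximal deviation from the discretized normal mass shrinks as the variance grows. A sketch (ours, with hypothetical success probabilities):

```python
from math import erf, sqrt

def poisson_binomial_pmf(ps):
    # exact pmf of S_n = X_1 + ... + X_n, X_j ~ Bernoulli(p_j),
    # computed by iterated convolution
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += (1.0 - p) * q
            new[k + 1] += p * q
        pmf = new
    return pmf

def discretized_normal(k, mu, sigma):
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    return cdf(k + 0.5) - cdf(k - 0.5)

def max_error(ps):
    mu = sum(ps)
    sigma = sqrt(sum(p * (1.0 - p) for p in ps))
    pmf = poisson_binomial_pmf(ps)
    return max(abs(pmf[k] - discretized_normal(k, mu, sigma))
               for k in range(len(pmf)))

ps_small = [0.2 + 0.5 * j / 50 for j in range(50)]
ps_large = [0.2 + 0.5 * j / 200 for j in range(200)]
err_small, err_large = max_error(ps_small), max_error(ps_large)
```

With four times as many summands, the maximal error drops by roughly the factor the \(O (\frac{1}{\sigma ^{2}} )\) rate predicts, which is the qualitative content of (42).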
Example 2
Let \(S_{n}\sim \operatorname{Bi}(p)\). Then
Proof
We can apply Example 1 by letting \(p_{j}=p\) and \(q_{j}=q\). □
Observe that the results in Example 1 and Example 2 have the same order as (3) and (4), but the constants are bigger. However, (3) and (4) cannot be applied to the following example.
Example 3
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be i.i.d. geometric random variables with parameter p. Then
where \(q=1-p\).
Proof
Let ψ be the characteristic function of \(X_{j}\). Then \(\psi (t)=\frac{pe^{it}}{1-qe^{it}}\) and
Hence, \(EX^{3}=\frac{\psi ^{(3)}(0)}{i^{3}}=\frac{p^{2}+6q}{p^{3}}\).
Note that
Hence, by Theorem 1.4, we get (43). □
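The closed form \(EX^{3}=\frac{p^{2}+6q}{p^{3}}\) used in the proof can be checked by direct summation of the geometric series (our sketch):

```python
def third_moment_direct(p, terms=2000):
    # E[X^3] for X geometric on {1, 2, ...} with P(X = k) = p q^{k-1},
    # summed term by term (the tail beyond `terms` is negligible)
    q = 1.0 - p
    return sum(k**3 * p * q ** (k - 1) for k in range(1, terms + 1))

def third_moment_closed(p):
    # the closed form (p^2 + 6q) / p^3 quoted in Example 3
    q = 1.0 - p
    return (p * p + 6.0 * q) / p**3

# e.g. p = 0.4: both give (0.16 + 3.6) / 0.064 = 58.75
```

The two agree to floating-point accuracy for any \(0<p\leq 1\); at \(p=1\) the formula correctly reduces to \(EX^{3}=1\).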
Example 4
Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent random variables such that
for all \(j=1, 2, \ldots , n\). Then
Proof
Note that \(E \vert X_{j} \vert ^{3}=\frac{27}{8}\), \(ES_{n}=\frac{9n}{8}\), \(\operatorname{Var}S_{n}=\frac{39n}{64}\), and
Hence, by Theorem 1.2, we see that (44) holds. □
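The moments quoted in the proof are consistent with, for instance, the law \(P(X_{j}=0)=\frac{1}{4}\), \(P(X_{j}=1)=P(X_{j}=2)=\frac{3}{8}\) (a hypothetical reconstruction on our part; the display equation above is not reproduced here). For this law \(P(X_{j}=0)<P(X_{j}=1)\), so Petrov's condition 3 fails, as the following exact-arithmetic check illustrates:

```python
from fractions import Fraction as F

# hypothetical law matching the moments quoted in Example 4
pmf = {0: F(1, 4), 1: F(3, 8), 2: F(3, 8)}

mean = sum(l * p for l, p in pmf.items())                # 9/8
var = sum(l * l * p for l, p in pmf.items()) - mean**2   # 39/64
m3 = sum(abs(l) ** 3 * p for l, p in pmf.items())        # 27/8

# Petrov's condition 3 requires P(X = 0) >= P(X = m) for all m
petrov_cond3 = all(pmf[0] >= p for p in pmf.values())    # False here
```

Summing over n such variables gives \(ES_{n}=\frac{9n}{8}\) and \(\operatorname{Var}S_{n}=\frac{39n}{64}\), matching the values used in the proof.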
One can see that Theorem 1.2 can be applied to Example 4 and get the rate of convergence \(O (\frac{1}{n} )\), but Petrov’s theorem [1] cannot be applied because this example does not satisfy its assumption 3.
Availability of data and materials
Not applicable.
References
Petrov, V.V.: Sums of Independent Random Variables. Springer, New York (1975). Translated from the Russian by A.A. Brown, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82
Giuliano Antonini, R., Weber, M.: Approximate local limit theorems with effective rate and application to random walks in random scenery. Bernoulli 23(4B), 3268–3310 (2017)
Berry, A.C.: The accuracy of the Gaussian approximation to the sum of independent variables. Transl. Am. Math. Soc. 49, 122–136 (1941)
Esseen, C.G.: On the Liapounoff limit of error in the theory of probability. Ark. Mat. Astron. Fys. 28A, 1–19 (1942)
Shevtsova, I.G.: An improvement of convergence rate estimates in the Lyapunov theorem. Dokl. Math. 82(3), 862–864 (2010)
Shevtsova, I.G.: Moment-type estimates with an improved structure for the accuracy of the normal approximation to distributions of sums of independent symmetric random variables. Teor. Veroâtn. Primen. 57, 499–532 (2012) (Russian). English transl. Theory Probab. Appl. 57, 468–496 (2013)
Shiganov, I.S.: A refinement of the upper bound of the constant in the remainder term of the central limit theorem. J. Sov. Math. 3, 2545–2550 (1986)
Shevtsova, I.G.: On the absolute constants in the Berry–Esseen inequality and its structural and nonuniform improvements. Inform. Primen. 7(1), 124–125 (2013) (Russian)
Tyurin, I.: A refinement of the remainder in the Lyapunov theorem. Theory Probab. Appl. 56(4), 693–696 (2010)
Van Beeck, P.: An application of Fourier methods to the problem of sharpening the Berry–Esseen inequality. Z. Wahrscheinlichkeitstheor. Verw. Geb. 23, 187–196 (1972)
McDonald, D.R.: The local limit theorem: a historical perspective. JIRSS 4(2), 73–86 (2005)
Zolotukhin, A., Nagaev, S., Chebotarev, V.: On a bound of the absolute constant in the Berry–Esseen inequality for i.i.d. Bernoulli random variables. Mod. Stoch. Theory Appl. 5(3), 385–410 (2018)
Siripraparat, T., Neammanee, K.: A local limit theorem for Poisson binomial random variable. Sci. Asia. https://doi.org/10.2306/scienceasia1513-1874.2021.006
Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
Prokhorov, Yu.V., Rozanov, Yu.A.: Probability Theory. Nauka, Moscow (1973) (in Russian)
Statulevichus, V.A.: Limit theorems for densities and asymptotic decompositions for distributions of sums of independent random variables. Teor. Veroâtn. Primen. 10(4), 645–659 (1965)
Ushakov, N.G.: Lower and upper bounds for characteristic functions. J. Math. Sci. 84, 1179–1189 (1997)
Ushakov, N.G.: Selected Topics in Characteristic Functions. VSP, Utrecht (1999)
Benedicks, M.: An estimate of the modulus of the characteristic function of a lattice distribution with application to remainder term estimates in local limit theorems. Ann. Probab. 3, 162–165 (1975)
Zhang, Z.: An upper bound for characteristic functions of lattice distributions with applications to survival probabilities of quantum states. J. Phys. A, Math. Theor. 40, 131–137 (2007)
Zhang, Z.: Bounds for characteristic functions and Laplace transforms of probability distributions. Theory Probab. Appl. 56(2), 350–358 (2012)
Neammanee, K.: A refinement of normal approximation to Poisson binomial. Int. J. Math. Math. Sci. 5, 717–728 (2005)
Acknowledgements
The authors would like to thank the reviewers for their valuable comments and suggestions.
Funding
This work was supported by the Development and Promotion of Science and Technology Talents Project (DPST).
Contributions
The authors contributed equally in writing the final version of this article. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Siripraparat, T., Neammanee, K. An improvement of convergence rate in the local limit theorem for integral-valued random variables. J Inequal Appl 2021, 57 (2021). https://doi.org/10.1186/s13660-021-02590-2