
An improvement of convergence rate in the local limit theorem for integral-valued random variables

Abstract

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables, and let \(S_{n}=\sum_{j=1}^{n}X_{j}\). One of the interesting probabilities is the probability at a particular point, i.e., the density of \(S_{n}\). A theorem that estimates this probability is called a local limit theorem; such theorems are useful in finance, biology, etc. Petrov (Sums of Independent Random Variables, 1975) gave the rate \(O (\frac{1}{n} )\) in the local limit theorem under a finite third moment condition. Bounds of this kind are usually stated only up to an unspecified constant, i.e., with the symbol O. Giuliano Antonini and Weber (Bernoulli 23(4B):3268–3310, 2017) were the first to give an explicit constant C in an error bound of the form \(\frac{C}{\sqrt{n}}\). In this paper, we improve the convergence rate and the constants of the error bounds in the local limit theorem for \(S_{n}\). Our constants are less complicated than the earlier ones, and thus easy to use.

1 Introduction

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables with means \(\mu _{j}\) and variances \(\sigma _{j}^{2}\) for \(j=1, 2, \ldots , n\). Let \(S_{n}=\sum_{j=1}^{n}X_{j}\), \(\mu =\sum_{j=1}^{n}\mu _{j}\), and \(\sigma ^{2}=\sum_{j=1}^{n}\sigma _{j}^{2}\). One of the interesting probabilities is the probability at a particular point, i.e., \(P(S_{n}=k)\) for \(k\in \mathbb{Z}\). Two density functions can be used to approximate this probability: the discretized normal and the normal. The discretized normal random variable \((\widetilde{Z}_{\mu ,\sigma ^{2}})\) has the probability mass function

$$\begin{aligned} P(\widetilde{Z}_{\mu ,\sigma ^{2}}=k) =&P \biggl( \frac{k-\mu -\frac{1}{2}}{\sigma }< Z_{\mu ,\sigma ^{2}} \leq \frac{k-\mu +\frac{1}{2}}{\sigma } \biggr) \\ =&\frac{1}{\sigma \sqrt{2\pi }} \int _{ \frac{k-\mu -\frac{1}{2}}{\sigma }}^{\frac{k-\mu +\frac{1}{2}}{\sigma }}e^{- \frac{x^{2}}{2}}\,dx, \end{aligned}$$

where \(Z_{\mu ,\sigma ^{2}}\) is a normal distribution with mean μ and variance \(\sigma ^{2}\).
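As a computational aside (ours, not part of the original development), the probability mass function above is just the normal probability of the unit interval centred at k and can be evaluated with the standard normal CDF. A minimal Python sketch:

```python
import math

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def discretized_normal_pmf(k: int, mu: float, sigma: float) -> float:
    """P(Z~_{mu,sigma^2} = k): normal mass of the interval (k - 1/2, k + 1/2]."""
    return (std_normal_cdf((k - mu + 0.5) / sigma)
            - std_normal_cdf((k - mu - 0.5) / sigma))

# Arbitrary illustration: mass at the mean of a discretized N(10, 4).
print(discretized_normal_pmf(10, mu=10.0, sigma=2.0))  # about 0.1974
```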

To approximate \(P(S_{n}=k)\) by the discretized normal density function, we can apply the Berry–Esseen theorem. Berry [3] and Esseen [4] were the first to bound the distance between \(P(S_{n}\leq k)\) and the normal distribution. Here is their result.

If \(E \vert X_{j} \vert ^{3}< \infty \) for \(j=1, 2, \ldots , n\), then

$$\begin{aligned} \sup_{k\in \mathbb{R}} \biggl\vert P \biggl( \frac{S_{n}-\mu }{\sigma }\leq k \biggr)-\frac{1}{\sqrt{2\pi }} \int _{-\infty }^{k}e^{-\frac{x^{2}}{2}}\,dx \biggr\vert \leq \frac{C_{0}}{\sigma ^{3}}\sum_{j=1}^{n}E \vert X_{j}-\mu _{j} \vert ^{3}, \end{aligned}$$
(1)

where \(C_{0}\) is an absolute constant.

We can apply (1) to show that

$$\begin{aligned} \biggl\vert P(S_{n}=k)-\frac{1}{\sqrt{2\pi }} \int _{ \frac{k-\mu -\frac{1}{2}}{\sigma }}^{\frac{k-\mu +\frac{1}{2}}{\sigma }}e^{- \frac{x^{2}}{2}}\,dx \biggr\vert \leq \frac{2C_{0}}{\sigma ^{3}}\sum_{j=1}^{n}E \vert X_{j}- \mu _{j} \vert ^{3}. \end{aligned}$$
(2)

The constant \(C_{0}\) in (2) has been found and improved by many mathematicians (see [3–10] for examples). The best known \(C_{0}\), obtained by Shevtsova [8] in 2013, is 0.5583 in the non-identically distributed case and 0.469 in the identically distributed case.
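As a computational aside, the right-hand side of (2) is straightforward to evaluate. The sketch below (ours) assumes Bernoulli summands, for which \(E \vert X_{j}-p_{j} \vert ^{3}=p_{j}q_{j}(p_{j}^{2}+q_{j}^{2})\), and uses Shevtsova's constant 0.5583 quoted above:

```python
def berry_esseen_bound(ps):
    """Right-hand side of (2) for independent Bernoulli(p_j) summands,
    with the non-i.i.d. constant C0 = 0.5583."""
    qs = [1.0 - p for p in ps]
    sigma2 = sum(p * q for p, q in zip(ps, qs))          # Var(S_n)
    # E|X_j - p_j|^3 = p_j q_j (p_j^2 + q_j^2) for Bernoulli(p_j)
    lyapunov = sum(p * q * (p * p + q * q) for p, q in zip(ps, qs))
    return 2.0 * 0.5583 * lyapunov / sigma2 ** 1.5

print(berry_esseen_bound([0.3] * 100))  # decays like 1/sqrt(n)
```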

The local limit theorem describes how the probability mass function of a sum of independent discrete random variables approaches the normal density.

Let

$$ \epsilon _{n}(k)= \biggl\vert P(S_{n}=k)- \frac{1}{\sigma \sqrt{2\pi }}e^{- \frac{(k-\mu )^{2}}{2\sigma ^{2}}} \biggr\vert . $$

De Moivre and Laplace (see [11]) established the local limit theorem for the binomial case in 1754. For sums of independent random variables, we can prove the local limit theorem by using the Berry–Esseen theorem and obtain the rate of convergence \(O (\frac{1}{\sqrt{n}} )\) (see [2]).

In 1971, Ibragimov and Linnik improved the rate of convergence from \(O (\frac{1}{\sqrt{n}} )\) to \(O (\frac{1}{n^{\frac{1}{2}+\alpha }} )\), \(0<\alpha < \frac{1}{2}\), in the case where the \(X_{j}\)s are identically distributed and square integrable.

For the non-identical case, Petrov (1975, [1]) showed that if

  1. \(\sigma ^{2}\to \infty \) as \(n\to \infty \),

  2. \(\sum_{j=1}^{n}E \vert X_{j}-\mu _{j} \vert ^{3}=O (\sigma ^{2} )\),

  3. \(P(X_{j}=0)\geq P(X_{j}=m)\) for all j and m, and

  4. \(\gcd \{ m:\frac{1}{\log n}\sum_{j=1}^{n}P(X_{j}=0)P(X_{j}=m) \to \infty \text{ as }n\to \infty \} =1\),

then

$$\begin{aligned} \epsilon _{n}(k)\leq \frac{C_{1}}{\sigma ^{2}}. \end{aligned}$$

Furthermore, Petrov ([1], see also [2]) improved the rate of convergence from \(O (\frac{1}{\sigma ^{2}} )\) to \(O (\frac{1}{n\sqrt{n}} )\) in the case of a symmetric binomial.

In the previous studies, no explicit constants for the error bounds were given; most of the theorems were stated only in terms of O. Therefore, finding the constants is of interest. In 2018, Zolotukhin, Nagaev, and Chebotarev [12] gave the convergence with an explicit constant in the error bound in the case that \(S_{n}\) is a binomial. They showed that

$$\begin{aligned} \epsilon _{n}(k)\leq \min \biggl\{ \frac{1}{\sigma \sqrt{2e}}, \frac{0.516}{\sigma ^{2}} \biggr\} . \end{aligned}$$
(3)

After that, Siripraparat and Neammanee [13] relaxed the identical-distribution condition and obtained the convergence in the case of a Poisson binomial in 2020. Their result is

$$\begin{aligned} \epsilon _{n}(k)\leq \frac{0.1194}{\sigma ^{2} (1-\frac{3}{4 \sigma } )^{3}}+ \frac{0.0749}{ \sigma ^{3}}+ \frac{0.2107}{\sigma ^{3} (1-\frac{3}{4 \sigma } )^{6}}+ \biggl(\frac{0.4579}{\sqrt{ \sigma }}+ \frac{0.4725}{ \sigma \sqrt{\sigma }} \biggr)e^{-\frac{3 \sigma }{2}}. \end{aligned}$$
(4)

Furthermore, in the case of \(S_{n}=\operatorname{Bi} (\frac{1}{2} )\) being a symmetric binomial, i.e., \(P(X_{j}=1)=\frac{1}{2}=1-P(X_{j}=0)\), they showed that

$$ \epsilon _{n}(k)\leq \frac{0.5992}{n\sqrt{n}}+ \frac{3.3984}{n^{2} (1-\frac{3}{2\sqrt{n}} )^{4}}+ \frac{337.8048}{n^{3}\sqrt{n} (1-\frac{3}{2\sqrt{n}} )^{8}}+ \biggl(\frac{0.6476}{n^{\frac{1}{4}}}+ \frac{1.3365}{n^{\frac{3}{4}}} \biggr)e^{-\frac{3\sqrt{n}}{4}}. $$
(5)

In 2017, Giuliano Antonini and Weber [2] gave the rate of convergence \(O (\frac{1}{\sigma } )\) with an explicit constant in the error bound in the case of sums of independent lattice random variables. X is a lattice random variable when the values of X lie in \(L(a,b)=\{v_{k}\}\), where \(v_{k}=a+bk\), \(k\in \mathbb{Z}\), and a and \(b>0\) are real numbers. They gave the following theorem.

Theorem 1.1

(See [2])

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent square integrable random variables taking values in a lattice \(L(a, b)\) and \(S_{n}=\sum_{j=1}^{n}X_{j}\). Let \(\alpha _{X}=\sum_{k\in \mathbb{Z}}\min \{ P(X=v_{k}), P(X=v_{k+1}) \} \) and \(V_{j}\)s, \(L_{j}\)s, \(\epsilon _{j}\)s be such that

$$\begin{aligned} V_{j}+\epsilon _{j}bL_{j}\overset{ \boldsymbol{D}}{=}X_{j}\quad \textit{for all } j=1, 2, \ldots , n, \end{aligned}$$

where \(P(L_{j}=0)=P(L_{j}=1)=\frac{1}{2}\), \(P(\epsilon _{j}=1)=1-P(\epsilon _{j}=0)=q_{j}\), where \(0< q_{j}\leq \alpha _{X_{j}}\) for all \(j=1, 2, \ldots , n\), and \((V_{j}, \epsilon _{j})\) and \(L_{j}\) are independent for each \(j=1, 2, \ldots , n\).

Assume that

  1. \(\frac{\log \lambda _{n}}{\lambda _{n}}\leq \frac{1}{14}\), where \(\lambda _{n}=\sum_{j=1}^{n}q_{j}\), and

  2. \(\frac{(k-ES_{n})^{2}}{\operatorname{Var}(S_{n})}\leq ( \frac{\lambda _{n}}{14\log \lambda _{n}} )^{\frac{1}{2}}\) for all \(k\in L(na, b)\).

Then

$$\begin{aligned} \biggl\vert P(S_{n}=k)-\frac{b}{\sqrt{2\pi \operatorname{Var}(S_{n})}}e^{- \frac{(k-ES_{n})^{2}}{2\operatorname{Var}S_{n}}} \biggr\vert \leq C_{2} \biggl[b \biggl( \frac{\log \lambda _{n}}{\operatorname{Var}(S_{n})\lambda _{n}} \biggr)^{\frac{1}{2}}+ \frac{\delta _{n}+\lambda _{n}^{-1}}{\sqrt{\lambda _{n}}} \biggr], \end{aligned}$$

where

$$\begin{aligned} &C_{2}=2^{\frac{7}{2}}\max \biggl\{ \frac{8}{\sqrt{2\pi }}, C_{3} \biggr\} , \\ &C_{3} \textit{ is the constant} \quad \textit{such that}\quad \sup _{z} \biggl\vert P \biggl(\operatorname{Bi} \biggl( \frac{1}{2} \biggr)=z \biggr)-\sqrt{ \frac{2}{\pi n}}e^{-\frac{(2z-n)^{2}}{2n}} \biggr\vert \leq \frac{C_{3}}{n\sqrt{n}}, \\ &\delta _{n}= \sup_{x\in \mathbb{R}} \biggl\vert P \biggl( \frac{S'_{n}-ES'_{n}}{\sqrt{\operatorname{Var}(S'_{n})}}< x \biggr)-P(Z_{0,1}< x) \biggr\vert , \quad \textit{and} \\ &S'_{n}=W_{n}+\frac{b}{2}B_{n}, W_{n}=\sum_{j=1}^{n}V_{j} \quad \textit{and}\quad B_{n}=\sum_{j=1}^{n} \epsilon _{j}. \end{aligned}$$

Note that if we choose the constant \(C_{3}\) from the error bound (5), then \(C_{2}\) is 36.1082, and the rate in Theorem 1.1 is \(O (\frac{1}{\sigma } )\). We can see that the bound of [2] depends on \(C_{3}\) and is still complicated. In this work, we improve the rate of convergence of [2] to \(O (\frac{1}{\sigma ^{2}} )\) and also give explicit constants in the error bounds. Our constants are not complicated and can be applied easily. The results are the following.

Theorem 1.2

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables and \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), where \(p_{jl}=P(X_{j}=l)\). If \(\alpha _{j}>0\) for all \(j=1, 2, \ldots , n\), then

$$ \epsilon _{n}(k) \leq \frac{2.2075e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}}{\tau \alpha }+{ \frac{1.7898}{\sigma ^{4}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}}, $$

where \(\tau =\frac{1}{10\sqrt[3]{\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}}\) and \(\alpha =\sum_{j=1}^{n}\alpha _{j}\).
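To illustrate how Theorem 1.2 is used in computations, here is a short Python sketch (ours; the helper name theorem12_bound is hypothetical) that evaluates the bound for independent integer-valued summands with finite support, each given as a dict {value: probability}:

```python
import math

def theorem12_bound(pmfs):
    """Evaluate the error bound of Theorem 1.2."""
    sigma2 = 0.0   # sigma^2 = Var(S_n)
    third = 0.0    # sum_j E|X_j|^3
    alpha = 0.0    # alpha = sum_j alpha_j, alpha_j = 2 * sum_l p_{jl} p_{j(l+1)}
    for pmf in pmfs:
        m1 = sum(l * p for l, p in pmf.items())
        m2 = sum(l * l * p for l, p in pmf.items())
        sigma2 += m2 - m1 * m1
        third += sum(abs(l) ** 3 * p for l, p in pmf.items())
        alpha += 2.0 * sum(p * pmf.get(l + 1, 0.0) for l, p in pmf.items())
    tau = 1.0 / (10.0 * third ** (1.0 / 3.0))
    return (2.2075 * math.exp(-tau * tau * alpha / math.pi ** 2) / (tau * alpha)
            + 1.7898 * third / sigma2 ** 2)

# Arbitrary illustration: 1000 copies of the distribution from Example 4 below.
print(theorem12_bound([{0: 0.25, 1: 0.375, 2: 0.375}] * 1000))
```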

Recall that b is said to be maximal when there are no numbers \(a'\) and \(b'>b\) for which \(P(X\in L(a',b'))=1\).

Theorem 1.3

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent random variables in a maximal lattice \(L(a, b)\) and

$$ \delta _{n}(k)= \biggl\vert P(S_{n}=na+kb)- \frac{b}{\sigma \sqrt{2\pi }}e^{- \frac{(bk-(\mu -na))^{2}}{2\sigma ^{2}}} \biggr\vert . $$

Then

$$ \delta _{n}(k)\leq \frac{2.2075e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}}{\tau \alpha }+{ \frac{1.7898b^{4}}{\sigma ^{4}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}}, $$

where \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), \(p_{jl}=P(X_{j}=a+bl)\), and \(\alpha =\sum_{j=1}^{n}\alpha _{j}\).

Theorem 1.4

If \(X_{1}, X_{2}, \ldots , X_{n}\) in Theorem 1.3 are independent identically distributed (i.i.d.), then

$$ \delta _{n}(k) \leq \frac{2.2075e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}}{\tau \alpha }+{ \frac{1.7898b^{4}}{n\sigma _{1}^{4}}E \vert X_{1} \vert ^{3}}, $$

where \(\tau =\frac{1}{10\sqrt[3]{nE \vert X_{1} \vert ^{3}}} \) and \(\alpha =2n\sum_{l=-\infty }^{\infty }p_{l}p_{l+1}\), \(p_{l}=P(X_{1}=a+bl)\).

Observe that the constants in Theorems 1.2–1.4 are simpler than the constant in Theorem 1.1.

We organize this paper as follows. In Sect. 2, we give the exponential bounds of a characteristic function which will be used to prove the main theorems in Sect. 3. After that we give some examples in Sect. 4.

2 Exponential bounds of a characteristic function

In this section, we let X be an integral-valued random variable with characteristic function ψ and \(\theta (t)=\) argument of \(\psi (t)\). Then

$$ \psi (t)=\sum_{j=-\infty }^{\infty }p_{j}e^{ijt}, \quad \text{where } p_{j}=P(X=j) \text{ for } t\in \mathbb{R} $$

and

$$\begin{aligned} \theta (t)=\arctan \biggl( \frac{\sum_{j=-\infty }^{\infty }p_{j}\sin (jt)}{\sum_{j=-\infty }^{\infty }p_{j}\cos (jt)} \biggr). \end{aligned}$$
(6)

Characteristic functions are important in probability theory and statistics, especially in local limit theorems, stability problems, etc. In the study of local limit theorems, one needs to estimate the modulus \(\vert \psi (t) \vert \) of a characteristic function ψ. Such bounds for \(\vert \psi (t) \vert \) play a key role in investigating the rate of convergence in local limit theorems. Previous studies have given bounds for \(\vert \psi (t) \vert \) in the case of continuous and bounded random variables in a variety of versions (see [14–18] for example). In addition, bounds for \(\vert \psi (t) \vert \) of a lattice random variable have appeared in a number of works (see [18–21] for example). Furthermore, there is an exponential bound for \(\vert \psi (t) \vert \) of a Poisson binomial distribution in Neammanee [22]. In this section, we use the idea of Neammanee [22] to obtain an exponential bound for \(\vert \psi (t) \vert \) of an integral-valued random variable. The following lemmas are our results.

Lemma 2.1

Let \(t\in [0, \pi )\) and \(\alpha =2\sum_{j=-\infty }^{\infty }p_{j}p_{j+1}\). Then \(\vert \psi (t) \vert \leq e^{-\frac{1}{\pi ^{2}}\alpha t^{2}}\).

Proof

Let \(t\in [0, \pi )\). If \(\vert \psi (t) \vert =0\), then Lemma 2.1 holds. Assume that \(\vert \psi (t) \vert >0\).

Note that

$$\begin{aligned} \bigl\vert \psi (t) \bigr\vert ^{2}&=\psi (t)\overline{\psi (t)} \\ &=\sum_{j=-\infty }^{\infty }p_{j}e^{ijt} \sum_{l=-\infty }^{\infty }p_{l}e^{-ilt} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }e^{it(j-l)}p_{j}p_{l} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\cos \bigl((j-l)t \bigr)p_{j}p_{l}+i \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{ \infty }\sin \bigl((j-l)t \bigr)p_{j}p_{l}. \end{aligned}$$

Since \(\vert \psi (t) \vert ^{2}\) is real, we get

$$\begin{aligned} \bigl\vert \psi (t) \bigr\vert ^{2}&=\sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty } \cos \bigl((j-l)t \bigr)p_{j}p_{l} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \biggl(1-2\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr) \biggr)p_{j}p_{l} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }p_{j}p_{l}-2 \sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty } \sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l} \\ &=1-2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l}. \end{aligned}$$
(7)

From this fact and the fact that \(\vert \psi (t) \vert >0\), we have

$$\begin{aligned} 0\leq 2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l}< 1. \end{aligned}$$
(8)

By (7) and (8), we get

$$\begin{aligned} \ln \bigl\vert \psi (t) \bigr\vert &=\frac{1}{2}\ln \Biggl(1-2\sum _{j=-\infty }^{\infty } \sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l} \Biggr) \\ &=-\frac{1}{2}\sum_{k=1}^{\infty } \frac{1}{k} \Biggl[2\sum_{j=-\infty }^{ \infty } \sum_{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l)\frac{t}{2} \biggr)p_{j}p_{l} \Biggr]^{k} \\ &\leq -\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l} \\ &=-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl( \vert j-l \vert \frac{t}{2} \biggr)p_{j}p_{l} \\ &\leq -\mathop{\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }}_{ \vert j-l \vert \leq 1}\frac{ (j-l )^{2}t^{2}}{\pi ^{2}}p_{j}p_{l} \\ &=-\frac{1}{\pi ^{2}}\alpha t^{2}, \end{aligned}$$
(9)

where we use the fact that \(\sin (\frac{t}{2} )\geq \frac{t}{\pi }\) on \([0, \pi )\) in the last inequality.

Hence, \(\vert \psi (t) \vert \leq e^{-\frac{1}{\pi ^{2}}\alpha t^{2}}\). □

Lemma 2.2

For \(t\in [0, \pi ]\),

$$\begin{aligned} \bigl\vert \psi (t) \bigr\vert \leq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}+\frac{2}{3}E \vert X \vert ^{3}t^{3}}. \end{aligned}$$

Proof

The lemma holds if \(\vert \psi (t) \vert =0\). Assume that \(\vert \psi (t) \vert >0\).

Note that

$$\begin{aligned} \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }(j-l)^{2}p_{j}p_{l} &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sum_{m=0}^{2} \binom{2 }{m}j^{m}(-l)^{2-m}p_{j}p_{l} \\ &=\sum_{m=0}^{2}(-1)^{2-m} \binom{2}{m}\sum_{j=-\infty }^{\infty }j^{m}p_{j} \sum_{l=-\infty }^{\infty }l^{2-m}p_{l} \\ &=\sum_{m=0}^{2}(-1)^{2-m} \binom{2}{m}EX^{m}EX^{2-m} \\ &=EX^{2}-2(EX)^{2}+EX^{2} \\ &=2\sigma ^{2}(X) \end{aligned}$$
(10)

and

$$\begin{aligned} \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \bigl( \vert j \vert + \vert l \vert \bigr)^{3}p_{j}p_{l} \leq 4\sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty } \bigl( \vert j \vert ^{3}+ \vert l \vert ^{3} \bigr)p_{j}p_{l} =8E \vert X \vert ^{3}, \end{aligned}$$
(11)

where we use the fact that \((a+b)^{k}\leq 2^{k-1}(a^{k}+b^{k})\), \(a, b\geq 0\), and \(k\in \mathbb{N}\) in the first inequality.

From the fact that

$$\begin{aligned} \cos (at)=1-\frac{1}{2}a^{2}t^{2}+ \frac{1}{6}a^{3}t^{3}\sin (t_{1})\quad \text{for some } t_{1} \end{aligned}$$

and (9), (10), (11), we get

$$\begin{aligned} &\ln \bigl\vert \psi (t) \bigr\vert \\ &\quad \leq -\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l} \\ &\quad =-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \biggl[ \frac{1}{2}- \frac{1}{2}\cos \bigl((j-l)t \bigr) \biggr]p_{j}p_{l} \\ &\quad =-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \biggl[ \frac{1}{2}- \frac{1}{2} \biggl(1-\frac{1}{2}(j-l)^{2}t^{2}+{{ \frac{1}{6}(j-l)^{3}t^{3}\sin \bigl((j-l)t_{1} \bigr)}} \biggr) \biggr]p_{j}p_{l} \\ &\quad =-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \biggl[ \frac{1}{4}(j-l)^{2}t^{2}-{{ \frac{1}{12}(j-l)^{3}t^{3}\sin \bigl((j-l)t_{1} \bigr)}} \biggr]p_{j}p_{l} \\ &\quad \leq -\frac{t^{2}}{4}\sum_{j=-\infty }^{\infty } \sum_{l=-\infty }^{ \infty }(j-l)^{2}p_{j}p_{l}+{ \frac{t^{3}}{12}\sum_{j=-\infty }^{\infty } \sum _{l=-\infty }^{\infty } \vert j-l \vert ^{3}p_{j}p_{l}} \\ &\quad \leq -\frac{1}{2}\sigma ^{2}(X)t^{2}+{ \frac{t^{3}}{12}\sum_{j=- \infty }^{\infty }\sum _{l=-\infty }^{\infty } \bigl( \vert j \vert + \vert l \vert \bigr)^{3}p_{j}p_{l}} \\ &\quad \leq -\frac{1}{2}\sigma ^{2}(X)t^{2}+ \frac{2}{3}E \vert X \vert ^{3}t^{3}. \end{aligned}$$

Hence, \(\vert \psi (t) \vert \leq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}+\frac{2}{3}E \vert X \vert ^{3}t^{3}}\). □

Lemma 2.3

Let \(\tau _{1}=\frac{1}{10\sqrt[3]{E \vert X \vert ^{3}}}\). Then

$$\begin{aligned} \bigl\vert \psi (t) \bigr\vert \geq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}-{\frac{2}{3}E \vert X \vert ^{3}t^{3}}} \quad \textit{for } t \in [0, \tau _{1} ]. \end{aligned}$$

Proof

Since \(\vert \sin (\theta ) \vert \leq \vert \theta \vert \), by (10) we have

$$\begin{aligned} 2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l}\leq \frac{t^{2}}{2} \sum_{j=-\infty }^{ \infty }\sum _{l=-\infty }^{\infty }(j-l)^{2}p_{j}p_{l}=t^{2} \sigma ^{2}(X). \end{aligned}$$
(12)

Note that

$$\begin{aligned} {\sigma ^{2}(X)\leq E \bigl(X^{2} \bigr)\leq \bigl(E \vert X \vert ^{3} \bigr)^{\frac{2}{3}}}. \end{aligned}$$
(13)

From (12), (13), and the fact that \(t^{2}\leq \frac{1}{100(E \vert X \vert ^{3})^{\frac{2}{3}}}\),

$$\begin{aligned} 0\leq 2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l}\leq \frac{1}{100}. \end{aligned}$$

Therefore,

$$\begin{aligned} 1-2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l}&\geq \frac{99}{100} \end{aligned}$$
(14)

and

$$\begin{aligned} - \frac{1}{1-2\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }\sin ^{2} ((j-l)\frac{t}{2} )p_{j}p_{l}}&\geq -\frac{100}{99}. \end{aligned}$$
(15)

By (9), (12), (13), and (15), we get

$$\begin{aligned} \ln \bigl\vert \psi (t) \bigr\vert &=-\sum_{j=-\infty }^{\infty } \sum_{l=-\infty }^{\infty } \sin ^{2} \biggl((j-l)\frac{t}{2} \biggr)p_{j}p_{l} \\ &\quad {} -\frac{1}{2}\sum_{k=2}^{\infty } \frac{1}{k} \Biggl[2\sum_{j=- \infty }^{\infty } \sum_{l=-\infty }^{\infty }\sin ^{2} \biggl((j-l) \frac{t}{2} \biggr)p_{j}p_{l} \Biggr]^{k} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}- \frac{1}{4} \frac{ [2\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }\sin ^{2} ((j-l)\frac{t}{2} )p_{j}p_{l} ]^{2}}{1-2\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }\sin ^{2} ((j-l)\frac{t}{2} )p_{j}p_{l}} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}- \frac{1}{4} \biggl( \frac{100}{99} \biggr) \Biggl[2\sum _{j=-\infty }^{\infty }\sum_{l=- \infty }^{\infty } \sin ^{2} \biggl((j-l)\frac{t}{2} \biggr)p_{j}p_{l} \Biggr]^{2} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}- \frac{25}{99}\sigma ^{4}(X)t^{4} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}-{ \frac{25}{99} \bigl(E \vert X \vert ^{3} \bigr)^{ \frac{4}{3}}t^{4}} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}-{ \frac{25}{99} \bigl(E \vert X \vert ^{3} \bigr)^{ \frac{4}{3}} \frac{1}{10\sqrt[3]{E \vert X \vert ^{3}}}t^{3}} \\ &\geq -\frac{1}{2}\sigma ^{2}(X)t^{2}-{ \frac{2}{3}E \vert X \vert ^{3}t^{3}}. \end{aligned}$$

Hence, \(\vert \psi (t) \vert \geq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}-{\frac{2}{3}E \vert X \vert ^{3}t^{3}}} \) for \(t\in [0, \tau _{1} ]\). □
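The two-sided envelope \(e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}-\frac{2}{3}E \vert X \vert ^{3}t^{3}}\leq \vert \psi (t) \vert \leq e^{-\frac{1}{2}\sigma ^{2}(X)t^{2}+\frac{2}{3}E \vert X \vert ^{3}t^{3}}\) of Lemmas 2.2 and 2.3 can also be checked numerically. A small Python sketch (ours; the test distribution is chosen arbitrarily):

```python
import cmath
import math

pmf = {0: 0.25, 1: 0.375, 2: 0.375}      # arbitrary integer-valued test pmf
m1 = sum(l * p for l, p in pmf.items())
var = sum(l * l * p for l, p in pmf.items()) - m1 ** 2
third = sum(abs(l) ** 3 * p for l, p in pmf.items())
tau1 = 1.0 / (10.0 * third ** (1.0 / 3.0))

for i in range(1, 11):                    # grid over (0, tau_1]
    t = tau1 * i / 10.0
    psi = abs(sum(p * cmath.exp(1j * l * t) for l, p in pmf.items()))
    lower = math.exp(-0.5 * var * t * t - (2.0 / 3.0) * third * t ** 3)
    upper = math.exp(-0.5 * var * t * t + (2.0 / 3.0) * third * t ** 3)
    assert lower <= psi <= upper, (t, lower, psi, upper)
print("envelope of Lemmas 2.2 and 2.3 holds on this grid")
```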

Lemma 2.4

  1. \(\theta ^{(1)}(0)=EX\).

  2. \(\theta ^{(2)}(0)=0\).

  3. \(\vert \theta ^{(3)}(t) \vert \leq 4.2874E \vert X \vert ^{3}\) for \(t \in [0, \tau _{1} ]\).

Proof

1. By (6), we get

$$\begin{aligned} \theta ^{(1)}(0)= \frac{\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }jp_{j}p_{l}}{\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }p_{j}p_{l}}=EX. \end{aligned}$$

2. Let \(A(t)=\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{ \infty }j\cos ((j-l)t )p_{j}p_{l}\) and \(B(t)=\sum_{j=-\infty }^{\infty }\sum_{l=-\infty }^{ \infty }\cos ((j-l)t )p_{j}p_{l}\).

Observe that

$$\begin{aligned} \theta ^{(1)}(t)=\frac{A(t)}{B(t)}\quad \text{and}\quad \theta ^{(2)}(t)= \frac{B(t)A'(t)-A(t)B'(t)}{(B(t))^{2}}, \end{aligned}$$
(16)

where

$$\begin{aligned}& A'(t)=-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l) \sin \bigl((j-l)t \bigr)p_{j}p_{l}\quad \text{and} \\& B'(t)=-\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }(j-l) \sin \bigl((j-l)t \bigr)p_{j}p_{l}. \end{aligned}$$

Since \(A'(0)=0\) and \(B'(0)=0\), \(\theta ^{(2)}(0)=0\).

3. Note that

$$\begin{aligned} \bigl\vert A(t) \bigr\vert &= \Biggl\vert \sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }j \cos \bigl((j-l)t \bigr)p_{j}p_{l} \Biggr\vert \leq E \vert X \vert . \end{aligned}$$
(17)

Similarly to (10), we get

$$\begin{aligned} \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l)^{2}p_{j}p_{l}=EX^{3}-EX^{2}EX. \end{aligned}$$

Therefore,

$$\begin{aligned} \Biggl\vert \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l)^{2}p_{j}p_{l} \Biggr\vert \leq 2E \vert X \vert ^{3}. \end{aligned}$$

Hence,

$$\begin{aligned} \bigl\vert A'(t) \bigr\vert =& \Biggl\vert -\sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }j(j-l) \sin \bigl((j-l)t \bigr)p_{j}p_{l} \Biggr\vert \\ \leq & \Biggl\vert \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l)^{2}tp_{j}p_{l} \Biggr\vert \\ \leq &\tau _{1} \Biggl\vert \sum_{j=-\infty }^{\infty } \sum_{l=-\infty }^{ \infty }j(j-l)^{2}p_{j}p_{l} \Biggr\vert \\ \leq &2\tau _{1} E \vert X \vert ^{3} \\ \leq &\frac{2}{10\sqrt[3]{E \vert X \vert ^{3}}}E \vert X \vert ^{3} \\ =&\frac{1}{5} \bigl(E \vert X \vert ^{3} \bigr)^{\frac{2}{3}} \end{aligned}$$
(18)

and

$$\begin{aligned} \bigl\vert A''(t) \bigr\vert &= \Biggl\vert - \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l)^{2} \cos \bigl((j-l)t \bigr)p_{j}p_{l} \Biggr\vert \\ &\leq \Biggl\vert \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }j(j-l)^{2}p_{j}p_{l} \Biggr\vert \\ &\leq 2E \vert X \vert ^{3}. \end{aligned}$$
(19)

By (14), we get

$$\begin{aligned} B(t)&=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\cos \bigl((j-l)t \bigr)p_{j}p_{l} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty } \biggl(1-2\sin ^{2} \biggl( \frac{(j-l)t}{2} \biggr) \biggr)p_{j}p_{l} \\ &=\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }p_{j}p_{l}-2 \sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }\sin ^{2} \biggl(\frac{(j-l)t}{2} \biggr)p_{j}p_{l} \\ &=1-2\sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }\sin ^{2} \biggl( \frac{(j-l)t}{2} \biggr)p_{j}p_{l} \\ &\geq \frac{99}{100}. \end{aligned}$$

From this fact and \(B(t)\leq 1\),

$$\begin{aligned} \frac{99}{100}\leq B(t)\leq 1. \end{aligned}$$
(20)

By (10) and (13), we obtain

$$\begin{aligned} \bigl\vert B'(t) \bigr\vert &= \Biggl\vert -\sum _{j=-\infty }^{\infty }\sum_{l=-\infty }^{\infty }(j-l) \sin \bigl((j-l)t \bigr)p_{j}p_{l} \Biggr\vert \\ &\leq \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }(j-l)^{2}tp_{j}p_{l} \\ &=2\tau _{1}\sigma ^{2}(X) \\ &\leq \frac{2}{10\sqrt[3]{E \vert X \vert ^{3}}} \bigl(E \vert X \vert ^{3} \bigr)^{\frac{2}{3}} \\ &=\frac{1}{5} \bigl(E \vert X \vert ^{3} \bigr)^{\frac{1}{3}} \end{aligned}$$
(21)

and

$$\begin{aligned} \bigl\vert B''(t) \bigr\vert &= \Biggl\vert - \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }(j-l)^{2} \cos \bigl((j-l)t \bigr)p_{j}p_{l} \Biggr\vert \\ &\leq \sum_{j=-\infty }^{\infty }\sum _{l=-\infty }^{\infty }(j-l)^{2}p_{j}p_{l} \\ &=2\sigma ^{2}(X) \\ &\leq 2 \bigl(E \vert X \vert ^{3} \bigr)^{\frac{2}{3}}. \end{aligned}$$
(22)

By (16), we obtain

$$\begin{aligned} \theta ^{(3)}(t) = \frac{(B(t))^{2}A''(t)-B(t)A(t)B''(t)-2B'(t)B(t)A'(t)+2A(t)(B'(t))^{2}}{(B(t))^{3}}. \end{aligned}$$

From this fact and (17)–(22), we get \(\vert \theta ^{(3)}(t) \vert \leq 4.2874E \vert X \vert ^{3}\). □

3 Proof of the main results

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent integral-valued random variables. Let \(S_{n}:=\sum_{i=1}^{n}X_{i}\), \(\mu :=ES_{n}\) and \(\sigma ^{2}:=\operatorname{Var}S_{n}\). Let \(\psi _{1}, \psi _{2}, \ldots , \psi _{n}\) and ψ be the characteristic functions of \(X_{1}, X_{2}, \ldots , X_{n}\) and \(S_{n}\), respectively. Then, for \(j=1, 2, \ldots , n\),

$$\begin{aligned} \psi _{j}(t)&=\sum_{l=-\infty }^{\infty }p_{jl}e^{ilt}= \sum_{l=- \infty }^{\infty }p_{jl}\cos (lt)+i \sum_{l=-\infty }^{\infty }p_{jl} \sin (lt) \end{aligned}$$

and

$$\begin{aligned} \psi (t)= \prod_{j=1}^{n}\psi _{j}(t). \end{aligned}$$

Note that \(\psi _{j}(t)= \vert \psi _{j}(t) \vert e^{i\theta _{j}(t)}\),

where \(\theta _{j}(t):=\text{argument of } \psi _{j}(t)=\arctan ( \frac{\sum_{l=-\infty }^{\infty }p_{jl}\sin (lt)}{\sum_{l=-\infty }^{\infty }p_{jl}\cos (lt)} )\).

Hence, \(\psi (t)=\rho (t)e^{i\theta (t)}\), where \(\theta (t)=\sum_{j=1}^{n}\theta _{j}(t)(\bmod 2\pi )\) and \(\rho (t)=\prod_{j=1}^{n} \vert \psi _{j}(t) \vert \).

From Siripraparat and Neammanee [13], we know that

$$\begin{aligned} P(S_{n}=k)=\frac{1}{\pi } \int _{0}^{\pi }\rho (t)\cos \bigl((k-\mu )t- \alpha (t) \bigr)\,dt, \end{aligned}$$
(23)

where \(\alpha (t)=\theta (t)-\mu t\).
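Since \((k-\mu )t-\alpha (t)=kt-\theta (t)\), the integrand in (23) is exactly \(\operatorname{Re} (\psi (t)e^{-ikt} )\). The following Python sketch (ours; the distribution and the values of n and k are arbitrary) confirms (23) against a pmf computed by direct convolution:

```python
import cmath
import math

pmf = {0: 0.25, 1: 0.375, 2: 0.375}   # n i.i.d. copies of this test pmf
n, k = 20, 22

def psi(t):
    """Characteristic function of S_n."""
    return sum(p * cmath.exp(1j * l * t) for l, p in pmf.items()) ** n

# (1/pi) * int_0^pi Re(psi(t) e^{-ikt}) dt via the trapezoidal rule
N = 20000
h = math.pi / N
vals = [(psi(i * h) * cmath.exp(-1j * k * i * h)).real for i in range(N + 1)]
fourier = (h / math.pi) * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# exact pmf of S_n by repeated convolution
dist = {0: 1.0}
for _ in range(n):
    nxt = {}
    for a, p in dist.items():
        for b, q in pmf.items():
            nxt[a + b] = nxt.get(a + b, 0.0) + p * q
    dist = nxt

print(fourier, dist.get(k, 0.0))      # agree up to quadrature error
```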

To prove our main theorems, we give the bound of \(\rho (t)\) and \(\cos ((k-\mu )t-\alpha (t) )\) in Lemma 3.1 and Lemma 3.2, respectively.

Lemma 3.1

Let \(\tau =\min (\frac{1}{10\sqrt[3]{\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}}, \pi )\). Then

$$\begin{aligned} \bigl\vert \rho (t)-e^{-\frac{1}{2}\sigma ^{2}t^{2}} \bigr\vert \leq {0.6672\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}e^{- \frac{1}{2}\sigma ^{2}t^{2}}}\quad \textit{for } t\in [0, \tau ). \end{aligned}$$

Proof

By Lemma 2.2 and Lemma 2.3, we get

$$\begin{aligned} &e^{-\frac{1}{2}\sigma ^{2}(X_{j})t^{2}-{{\frac{2}{3}E \vert X_{j} \vert ^{3}t^{3}}}} \leq \bigl\vert \psi _{j}(t) \bigr\vert \leq e^{-\frac{1}{2}\sigma ^{2}(X_{j})t^{2}+ \frac{2}{3}E \vert X_{j} \vert ^{3}t^{3}}. \end{aligned}$$
(24)

By (24), we obtain

$$\begin{aligned} e^{-\frac{1}{2}\sigma ^{2}t^{2}-{\frac{2}{3}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}} \leq \rho (t)\leq e^{-\frac{1}{2}\sigma ^{2}t^{2}+{\frac{2}{3}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}}. \end{aligned}$$

Thus,

$$\begin{aligned} \bigl(e^{-{\frac{2}{3}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}}-1 \bigr)e^{- \frac{1}{2}\sigma ^{2}t^{2}} &\leq \rho (t)-e^{-\frac{1}{2}\sigma ^{2}t^{2}} \\ &\leq \bigl(e^{{\frac{2}{3}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}}-1 \bigr)e^{-\frac{1}{2}\sigma ^{2}t^{2}}. \end{aligned}$$

Hence,

$$\begin{aligned} -{\frac{2}{3}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}e^{-\frac{1}{2}\sigma ^{2}t^{2}}&\leq \rho (t)-e^{-\frac{1}{2}\sigma ^{2}t^{2}} \\ &\leq {\frac{2}{3}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}e^{{\frac{2}{3} \sum _{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}}}e^{-\frac{1}{2}\sigma ^{2}t^{2}}, \end{aligned}$$
(25)

where we have used the fact

$$\begin{aligned} e^{x}-1\leq xe^{x}\quad \text{and} \quad e^{-x}-1>-x\quad \text{for } x>0. \end{aligned}$$

By (25) and the fact that \(t^{3}\leq \frac{1}{1000\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\), we obtain \(\vert \rho (t)-e^{-\frac{1}{2}\sigma ^{2}t^{2}} \vert \leq {0.6672 \sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}e^{-\frac{1}{2}\sigma ^{2}t^{2}}}\). □

Lemma 3.2

For \(t\in [0, \tau ]\), we have \(\cos ((k-\mu )t-\alpha (t) )=\cos ((k-\mu )t)+\triangle \), where \(\vert \triangle \vert \leq 0.7152\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}\).

Proof

Using Taylor’s expansion, we have

$$\begin{aligned}& \cos \bigl(\alpha (t) \bigr)=1-\frac{1}{2}\cos (t_{2}) \bigl( \alpha (t) \bigr)^{2}\quad \text{for some } t_{2}, \end{aligned}$$
(26)
$$\begin{aligned}& \sin \bigl(\alpha (t) \bigr)=\alpha (t)-\frac{1}{2}\sin (t_{3}) \bigl(\alpha (t) \bigr)^{2} \quad \text{for some } t_{3}, \quad \text{and} \end{aligned}$$
(27)
$$\begin{aligned}& \theta _{j}(t)=\theta _{j}^{(1)}(0)t+ \frac{1}{2}\theta _{j}^{(2)}(0)t^{2}+ \frac{1}{6}\theta _{j}^{(3)}(t_{4})t^{3} \quad \text{for some } t_{4}. \end{aligned}$$
(28)

By Lemma 2.4, (28) and the fact that \(\tau \leq \tau _{1}\), we get

$$\begin{aligned} \bigl\vert \alpha (t) \bigr\vert &\leq \frac{1}{6}\sum _{j=1}^{n}2.1437 \bigl(E \vert X_{j} \vert \sigma ^{2}(X_{j})+E \vert X_{j} \vert ^{3} \bigr)t^{3} \\ &\leq 0.7146\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}. \end{aligned}$$
(29)

By (26) and (27), we obtain

$$\begin{aligned} &\cos \bigl((k-\mu )t-\alpha (t) \bigr) \\ &\quad =\cos \bigl((k-\mu )t \bigr)\cos \bigl(\alpha (t) \bigr)+\sin \bigl((k- \mu )t \bigr)\sin \bigl(\alpha (t) \bigr) \\ &\quad =\cos \bigl((k-\mu )t \bigr) \biggl[1-\frac{1}{2}\cos (t_{2})\alpha ^{2}(t) \biggr]+\sin \bigl((k-\mu )t \bigr) \biggl[\alpha (t)-\frac{1}{2}\sin (t_{3}) \alpha ^{2}(t) \biggr] \\ &\quad =\cos \bigl((k-\mu )t \bigr)+\triangle , \end{aligned}$$

where

$$\begin{aligned} \vert \triangle \vert \leq \bigl\vert \alpha (t) \bigr\vert +\alpha ^{2}(t). \end{aligned}$$
(30)

By (29) and \(t^{3}\leq \frac{1}{1000\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\), we obtain \(\vert \alpha (t) \vert \leq \frac{0.7146}{1000}\).

From this fact, (29) and (30) imply that \(\vert \triangle \vert \leq 0.7152\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}t^{3}\). □

We are now ready to prove Theorem 1.2.

Proof of Theorem 1.2

Note that

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\pi }\rho (t)\cos \bigl((k-\mu )t-\alpha (t) \bigr)\,dt= \frac{1}{\pi } \int _{0}^{\tau }\rho (t)\cos \bigl((k-\mu )t-\alpha (t) \bigr)\,dt+ \triangle _{1}, \end{aligned}$$
(31)

where \(\triangle _{1}=\frac{1}{\pi }\int _{\tau }^{\pi }\rho (t) \cos ((k-\mu )t-\alpha (t))\,dt\).

By Lemma 2.1, \(\vert \triangle _{1} \vert \leq \frac{1}{\pi }\int _{\tau }^{\pi } \rho (t)\,dt \leq \frac{1}{\pi }\int _{\tau }^{\infty }e^{- \frac{1}{\pi ^{2}}\alpha t^{2}}\,dt \leq \frac{1}{\pi \tau }\int _{\tau }^{\infty }te^{- \frac{1}{\pi ^{2}}\alpha t^{2}}\,dt= \frac{\pi }{2\tau \alpha }e^{- \frac{\tau ^{2}\alpha }{\pi ^{2}}}\).

From the fact that

$$\begin{aligned} \int _{0}^{\infty }t^{3}e^{-\frac{1}{2}\sigma ^{2}t^{2}}\,dt= \frac{2}{\sigma ^{4}} \end{aligned}$$
(32)

and Lemma 3.1, we have

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\tau }\rho (t)\cos \bigl((k-\mu )t-\alpha (t) \bigr)\,dt= \frac{1}{\pi } \int _{0}^{\tau }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k- \mu )t- \alpha (t) \bigr)\,dt+\triangle _{2}, \end{aligned}$$
(33)

where

$$\begin{aligned} \vert \triangle _{2} \vert &\leq {0.6672\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}} \int _{0}^{ \tau }t^{3}e^{-\frac{1}{2}\sigma ^{2}t^{2}}\,dt \\ &\leq {0.6672\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}} \int _{0}^{\infty }t^{3}e^{- \frac{1}{2}\sigma ^{2}t^{2}}\,dt \\ &={\frac{1.3344}{\sigma ^{4}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}. \end{aligned}$$
(34)

From (32) and Lemma 3.2, we get

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\tau }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k- \mu )t- \alpha (t) \bigr)\,dt=\frac{1}{\pi } \int _{0}^{\tau }e^{-\frac{1}{2} \sigma ^{2}t^{2}}\cos \bigl((k-\mu )t \bigr)\,dt+\triangle _{3}, \end{aligned}$$
(35)

where

$$\begin{aligned} \vert \triangle _{3} \vert \leq \frac{0.7152}{\pi } \sum_{j=1}^{n}E \vert X_{j} \vert ^{3} \int _{0}^{\infty }t^{3}e^{-\frac{1}{2}\sigma ^{2}t^{2}}\,dt= \frac{0.4554}{\sigma ^{4}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}. \end{aligned}$$
(36)

By (31) and (33)–(36), we obtain

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\pi }\rho (t)\cos \bigl((k-\mu )t-\alpha (t) \bigr)\,dt= \frac{1}{\pi } \int _{0}^{\tau }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k- \mu )t \bigr)\,dt+\triangle _{4}, \end{aligned}$$
(37)

where

$$\begin{aligned} \vert \triangle _{4} \vert &\leq \vert \triangle _{1} \vert + \vert \triangle _{2} \vert + \vert \triangle _{3} \vert \\ &\leq \frac{\pi }{2\tau \alpha }e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}+{ \frac{1.3344}{\sigma ^{4}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}}+ \frac{0.4554}{\sigma ^{4}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3} \\ &=\frac{\pi }{2\tau \alpha }e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}+ \frac{1.7898}{\sigma ^{4}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}. \end{aligned}$$
(38)

From (10), we can see that

$$\begin{aligned} \alpha =2\sum_{j=1}^{n}\sum _{l=-\infty }^{\infty }p_{jl}p_{j(l+1)} = \sum _{j=1}^{n} \Biggl(\mathop{\sum _{l=-\infty }^{\infty }\sum_{m=- \infty }^{\infty }}_{ \vert l-m \vert \leq 1} (l-m )^{2}p_{jl}p_{jm} \Biggr) \leq 2\sigma ^{2}, \end{aligned}$$

which implies that \(e^{-\frac{1}{2}\sigma ^{2}t^{2}}\leq e^{-\frac{1}{4}\alpha t^{2}}\). From this fact, we get

$$\begin{aligned} \frac{1}{\pi } \biggl\vert \int _{\tau }^{\infty }e^{-\frac{1}{2}\sigma ^{2}t^{2}} \cos \bigl((k-\mu )t \bigr)\,dt \biggr\vert &\leq \frac{1}{\pi } \int _{\tau }^{\infty }e^{- \frac{1}{2}\sigma ^{2}t^{2}}\,dt \\ &\leq \frac{1}{\pi } \int _{\tau }^{\infty }e^{-\frac{1}{4}\alpha t^{2}}\,dt \\ &\leq \frac{1}{\pi \tau } \int _{\tau }^{\infty }te^{-\frac{1}{4}\alpha t^{2}}\,dt \\ &= \frac{2}{\pi \tau \alpha }e^{-\frac{\tau ^{2}\alpha }{4}} . \end{aligned}$$

From this fact and (37) and (38), we have

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\pi }\rho (t)\cos \bigl((k-\mu )t-\alpha (t) \bigr)\,dt= \frac{1}{\pi } \int _{0}^{\infty }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k- \mu )t \bigr)\,dt+\triangle _{5}, \end{aligned}$$
(39)

where

$$\begin{aligned} \vert \triangle _{5} \vert &\leq \vert \triangle _{4} \vert +\frac{1}{\pi } \biggl\vert \int _{ \tau }^{\infty }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k-\mu )t \bigr)\,dt \biggr\vert \\ &\leq \frac{\pi }{2\tau \alpha }e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}+ \frac{1.7898}{\sigma ^{4}}\sum _{j=1}^{n}E \vert X_{j} \vert ^{3}+ \frac{2}{\pi \tau \alpha }e^{-\frac{\tau ^{2}\alpha }{4}}. \end{aligned}$$
(40)

Using the fact that

$$ \int _{0}^{\infty }e^{-at^{2}}\cos (bt)\,dt= \frac{1}{2}\sqrt{ \frac{\pi }{a}}e^{-\frac{b^{2}}{4a}}\quad \text{for } a>0 $$

(see [13], p. 7), we obtain

$$\begin{aligned} \frac{1}{\pi } \int _{0}^{\infty }e^{-\frac{1}{2}\sigma ^{2}t^{2}}\cos \bigl((k-\mu )t \bigr)\,dt =\frac{1}{\sigma \sqrt{2\pi }}e^{- \frac{(k-\mu )^{2}}{2\sigma ^{2}}}. \end{aligned}$$
(41)

By (23), (39), (40), and (41), we can conclude that

$$\begin{aligned} P(S_{n}=k) =\frac{1}{\sigma \sqrt{2\pi }}e^{- \frac{(k-\mu )^{2}}{2\sigma ^{2}}}+\triangle _{6}, \end{aligned}$$

where \(\vert \triangle _{6} \vert \leq \frac{2.2075e^{-\frac{\tau ^{2}\alpha }{\pi ^{2}}}}{\tau \alpha }+{ \frac{1.7898}{\sigma ^{4}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}}\). □

Proof of Theorem 1.3

Let \(Y_{j}=\frac{X_{j}}{b}-\frac{a}{b}\). Then

$$\begin{aligned}& E \Biggl( \sum_{j=1}^{n}Y_{j} \Biggr)= \frac{\mu -na}{b},\quad\quad \operatorname{Var} \Biggl( \sum _{j=1}^{n}Y_{j} \Biggr)=\frac{\sigma ^{2}}{b^{2}}, \\& P(S_{n}=na+kb) =P \Biggl(\sum_{j=1}^{n}Y_{j}=k \Biggr) \end{aligned}$$

and

$$\begin{aligned} P(Y_{j}=k)=P \biggl(\frac{X_{j}}{b}-\frac{a}{b}=k \biggr)=P(X_{j}=a+bk). \end{aligned}$$

Since b is maximal, we have \(\alpha =\sum_{j=1}^{n}\alpha _{j}>0\),

where \(\alpha _{j}=2\sum_{l=-\infty }^{\infty }p_{jl}p_{j(l+1)}\), \(p_{jl}=P(X_{j}=a+bl)\).

From Theorem 1.2, we obtain Theorem 1.3. □

4 Examples of the main results

In this section, we apply our main theorems to the Poisson binomial, binomial, and negative binomial distributions, as shown in Examples 1–3. In addition, Example 4 gives a case to which our main results can be applied but the result of Petrov [1] cannot.

Example 1

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be independent Bernoulli random variables with \(P(X_{j}=1)=p_{j}\) and \(P(X_{j}=0)=q_{j}\), where \(p_{j}+q_{j}=1\) for \(j=1, 2, \ldots , n\), so that \(S_{n}\) is a Poisson binomial random variable. Then

$$\begin{aligned} \biggl\vert P(S_{n}=k)-\frac{1}{\sigma \sqrt{2\pi }}e^{- \frac{(k-\mu )^{2}}{2\sigma ^{2}}} \biggr\vert &\leq \frac{11.0375\sqrt[3]{\mu }e^{-\frac{\sigma ^{2}}{50\pi ^{2}(\sqrt[3]{\mu })^{2}}}}{\sigma ^{2}}+ \frac{1.7898\mu }{\sigma ^{4}}, \end{aligned}$$
(42)

where \(\mu =\sum_{j=1}^{n}p_{j}\) and \(\sigma ^{2}=\sum_{j=1}^{n}p_{j}q_{j}\).

Proof

Note that \(E \vert X_{j} \vert ^{3}=p_{j}\) and

$$\begin{aligned} \alpha&=\sum_{j=1}^{n}\alpha _{j} =2 \sum_{j=1}^{n} \sum _{l=0}^{1}p_{j,l}p_{j,l+1} \\ &=2 \sum_{j=1}^{n}\sum _{l=0}^{1}P(X_{j}=l)P(X_{j}=l+1) \\ &=2\sum_{j=1}^{n}P(X_{j}=0)P(X_{j}=1) \\ &=2\sum_{j=1}^{n}p_{j}q_{j} \\ &=2\sigma ^{2}. \end{aligned}$$

Hence, by Theorem 1.2, we see that (42) holds. □
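The bound (42) can be checked numerically. The sketch below (ours; the \(p_{j}\) are arbitrary test values) computes the exact Poisson binomial pmf by the standard dynamic-programming recursion, the maximal deviation from the normal density, and the right-hand side of (42):

```python
import math
import random

random.seed(1)
ps = [random.uniform(0.2, 0.8) for _ in range(500)]   # arbitrary p_j's
mu = sum(ps)
sigma2 = sum(p * (1.0 - p) for p in ps)
sigma = math.sqrt(sigma2)

# exact Poisson binomial pmf: convolve one Bernoulli(p) at a time
pmf = [1.0]
for p in ps:
    pmf = [q * (1.0 - p) + (pmf[i - 1] * p if i else 0.0)
           for i, q in enumerate(pmf)] + [pmf[-1] * p]

err = max(abs(pk - math.exp(-(k - mu) ** 2 / (2.0 * sigma2))
              / (sigma * math.sqrt(2.0 * math.pi)))
          for k, pk in enumerate(pmf))
bound = (11.0375 * mu ** (1.0 / 3.0)
         * math.exp(-sigma2 / (50.0 * math.pi ** 2 * mu ** (2.0 / 3.0))) / sigma2
         + 1.7898 * mu / sigma2 ** 2)
print(err, bound)   # err should not exceed bound
```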

Example 2

Let \(S_{n}\sim \operatorname{Bi}(p)\). Then

$$\begin{aligned} \biggl\vert P(S_{n}=k)-\frac{1}{\sqrt{2\pi npq}}e^{- \frac{(k-np)^{2}}{2npq}} \biggr\vert &\leq \frac{11.0375\sqrt[3]{p}e^{-\frac{npq}{50\pi ^{2}(\sqrt[3]{np})^{2}}}}{ n^{\frac{2}{3}}pq}+{ \frac{1.7898}{npq^{2}}}. \end{aligned}$$

Proof

We can apply Example 1 by letting \(p_{j}=p\) and \(q_{j}=q\). □

Observe that the results in Example 1 and Example 2 have the same order as (3) and (4), but the constants are bigger. However, (3) and (4) cannot be applied to the following example.

Example 3

Let \(X_{1}, X_{2}, \ldots , X_{n}\) be i.i.d. geometric random variables with parameter p. Then

$$\begin{aligned} & \biggl\vert P(S_{n}=k)-\frac{p}{\sqrt{2\pi nq}}e^{-\frac{(kp-n)^{2}}{2nq}} \biggr\vert \\ &\quad \leq \frac{11.0375(1+q)\sqrt[3]{p^{2}+6q}e^{-\frac{np^{3}q}{50\pi ^{2}(1+q)(\sqrt[3]{n(p^{2}+6q)})^{2}}}}{ n^{\frac{2}{3}}p^{2}q}+{ \frac{1.7898p(p^{2}+6q)}{nq^{2}}}, \end{aligned}$$
(43)

where \(q=1-p\).

Proof

Let ψ be the characteristic function of \(X_{1}\). Then \(\psi (t)=\frac{pe^{it}}{1-qe^{it}}\) and

$$\begin{aligned} \psi ^{(3)}(t)&=- \frac{ipe^{it} (q^{2}e^{2it}+4qe^{it}+1 )}{ (1-qe^{it} )^{4}}. \end{aligned}$$

Hence, \(EX_{1}^{3}=\frac{\psi ^{(3)}(0)}{i^{3}}=\frac{p^{2}+6q}{p^{3}}\).

Note that

$$\begin{aligned} \alpha &=2n \sum_{l=1}^{\infty }p_{1,l}p_{1,l+1}=2n \sum_{l=1}^{\infty }P(X_{1}=l)P(X_{1}=l+1) \\ &=2n\frac{p^{2}}{q}\sum_{l=1}^{\infty }q^{2l} \\ &=\frac{2npq}{1+q}. \end{aligned}$$

Hence, by Theorem 1.4, we get (43). □

Example 4

Let \(X_{n}\) be a sequence of independent random variables such that

$$\begin{aligned} P(X_{j}=0)=\frac{1}{4}, P(X_{j}=1)= \frac{3}{8}\quad \text{and}\quad P(X_{j}=2)= \frac{3}{8} \end{aligned}$$

for all \(j=1, 2, \ldots , n\). Then

$$\begin{aligned} \biggl\vert P(S_{n}=k)-\frac{0.5111}{\sqrt{n}}e^{-\frac{(8k-9n)^{2}}{78n}} \biggr\vert &\leq \frac{70.64}{n^{\frac{2}{3}}}e^{-0.00021n^{\frac{1}{3}}}+ \frac{16.2671}{n}. \end{aligned}$$
(44)

Proof

Note that \(E \vert X_{j} \vert ^{3}=\frac{27}{8}\), \(ES_{n}=\frac{9n}{8}\), \(\operatorname{Var}S_{n}=\frac{39n}{64}\), and

$$\begin{aligned} \alpha &=\sum_{j=1}^{n}\alpha _{j} =2 \sum_{j=1}^{n} \sum _{l=0}^{2}p_{j,l}p_{j,l+1} \\ &=2 \sum_{j=1}^{n}\sum _{l=0}^{2}P(X_{j}=l)P(X_{j}=l+1) \\ &=2\sum_{j=1}^{n} \bigl(P(X_{j}=0)P(X_{j}=1)+P(X_{j}=1)P(X_{j}=2) \bigr) \\ &=2\sum_{j=1}^{n} \biggl(\frac{1}{4} \times \frac{3}{8}+\frac{3}{8} \times \frac{3}{8} \biggr) \\ &=\frac{15n}{32}. \end{aligned}$$

Hence, by Theorem 1.2, we see that (44) holds. □
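As a numerical check of Example 4 (ours; n is an arbitrary moderate value), the sketch below convolves the pmf \((\frac{1}{4}, \frac{3}{8}, \frac{3}{8} )\) n times and compares the maximal deviation from the normal density with the bound of Theorem 1.2, using \(ES_{n}=\frac{9n}{8}\), \(\operatorname{Var}S_{n}=\frac{39n}{64}\), \(\alpha =\frac{15n}{32}\), and \(\sum_{j=1}^{n}E \vert X_{j} \vert ^{3}=\frac{27n}{8}\):

```python
import math

n = 200
mu, sigma2 = 9.0 * n / 8.0, 39.0 * n / 64.0
alpha, third = 15.0 * n / 32.0, 27.0 * n / 8.0
tau = 1.0 / (10.0 * third ** (1.0 / 3.0))

dist = [1.0]
for _ in range(n):                 # convolve with (1/4, 3/8, 3/8)
    nxt = [0.0] * (len(dist) + 2)
    for i, p in enumerate(dist):
        nxt[i] += p / 4.0
        nxt[i + 1] += 3.0 * p / 8.0
        nxt[i + 2] += 3.0 * p / 8.0
    dist = nxt

err = max(abs(p - math.exp(-(k - mu) ** 2 / (2.0 * sigma2))
              / math.sqrt(2.0 * math.pi * sigma2))
          for k, p in enumerate(dist))
bound = (2.2075 * math.exp(-tau * tau * alpha / math.pi ** 2) / (tau * alpha)
         + 1.7898 * third / sigma2 ** 2)
print(err, bound)                  # err <= bound
```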

One can see that Theorem 1.2 applies to Example 4 and gives the rate of convergence \(O (\frac{1}{n} )\), but Petrov's theorem [1] cannot be applied because this example does not satisfy its assumption 3.

Availability of data and materials

Not applicable.

References

  1. Petrov, V.V.: Sums of Independent Random Variables. Springer, New York (1975). Translated from the Russian by A.A. Brown, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82


  2. Giuliano Antonini, R., Weber, M.: Approximate local limit theorems with effective rate and application to random walks in random scenery. Bernoulli 23(4B), 3268–3310 (2017)


  3. Berry, A.C.: The accuracy of the Gaussian approximation to the sum of independent variables. Transl. Am. Math. Soc. 49, 122–136 (1941)


  4. Esseen, C.G.: On the Liapounoff limit of error in the theory of probability. Ark. Mat. Astron. Fys. 28A, 1–19 (1942)


  5. Shevtsova, I.G.: An improvement of convergence rate estimates in the Lyapunov theorem. Dokl. Math. 82(3), 862–864 (2010)


  6. Shevtsova, I.G.: Moment-type estimates with an improved structure for the accuracy of the normal approximation to distributions of sums of independent symmetric random variables. Teor. Veroâtn. Primen. 57, 499–532 (2012) (Russian). English transl. Theory Probab. Appl. 57, 468–496 (2013)


  7. Shiganov, I.S.: A refinement of the upper bound of the constant in the remainder term of the central limit theorem. J. Sov. Math. 3, 2545–2550 (1986)


  8. Shevtsova, I.G.: On the absolute constants in the Berry–Esseen inequality and its structural and nonuniform improvements. Inform. Primen. 7(1), 124–125 (2013) (Russian)


  9. Tyurin, I.: A refinement of the remainder in the Lyapunov theorem. Theory Probab. Appl. 56(4), 693–696 (2010)


  10. Van Beeck, P.: An application of Fourier methods to the problem of sharpening the Berry–Esseen inequality. Z. Wahrscheinlichkeitstheor. Verw. Geb. 23, 187–196 (1972)


  11. McDonald, D.R.: The local limit theorem: a historical perspective. JIRSS 4(2), 73–86 (2005)


  12. Zolotukhin, A., Nagaev, S., Chebotarev, V.: On a bound of the absolute constant in the Berry–Esseen inequality for i.i.d. Bernoulli random variables. Mod. Stoch. Theory Appl. 5(3), 385–410 (2018)


  13. Siripraparat, T., Neammanee, K.: A local limit theorem for Poisson binomial random variable. Sci. Asia. https://doi.org/10.2306/scienceasia1513-1874.2021.006

  14. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)


  15. Prokhorov, Yu.V., Rozanov, Yu.A.: Probability Theory. Nauka, Moscow (1973) (in Russian)


  16. Statulevichus, V.A.: Limit theorems for densities and asymptotic decompositions for distributions of sums of independent random variables. Teor. Veroâtn. Primen. 10(4), 645–659 (1965)


  17. Ushakov, N.G.: Lower and upper bounds for characteristic functions. J. Math. Sci. 84, 1179–1189 (1997)


  18. Ushakov, N.G.: Selected Topics in Characteristic Functions. VSP, Utrecht (1999)


  19. Benedicks, M.: An estimate of the modulus of the characteristic function of a lattice distribution with application to remainder term estimates in local limit theorems. Ann. Probab. 3, 162–165 (1975)


  20. Zhang, Z.: An upper bound for characteristic functions of lattice distributions with applications to survival probabilities of quantum states. J. Phys. A, Math. Theor. 40, 131–137 (2007)


  21. Zhang, Z.: Bounds for characteristic functions and Laplace transforms of probability distributions. Theory Probab. Appl. 56(2), 350–358 (2012)


  22. Neammanee, K.: A refinement of normal approximation to Poisson binomial. Int. J. Math. Math. Sci. 5, 717–728 (2005)



Acknowledgements

The authors would like to thank the reviewers for their valuable comments and suggestions.

Funding

This work was supported by the Development and Promotion of Science and Technology Talents Project (DPST).

Author information


Contributions

The authors contributed equally in writing the final version of this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kritsana Neammanee.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Siripraparat, T., Neammanee, K. An improvement of convergence rate in the local limit theorem for integral-valued random variables. J Inequal Appl 2021, 57 (2021). https://doi.org/10.1186/s13660-021-02590-2
