
Local limit theorems without assuming finite third moment

Abstract

One of the most fundamental probabilities is the probability at a particular point. The local limit theorem is the well-known theorem that estimates this probability. In this paper, we estimate this probability by the density function of the normal distribution in the case of lattice integer-valued random variables. Our technique is the characteristic function method. We relax the third moment condition of Siripraparat and Neammanee (J. Inequal. Appl. 2021:57, 2021) and the references therein and also obtain explicit constants for the error bound.

1 Introduction and main results

Let X be an integer-valued random variable. One of the most fundamental probabilities is the probability at a particular point, i.e., \(P(X = k)\) for some \(k\in \mathbb{Z}\). The local limit theorem is one of the theorems that estimate this probability and describe how \(P(X=k)\) approaches the normal density \(\frac {1}{\sigma \sqrt{2\pi}}e^{-\frac{(k-\mu )^{2}}{2\sigma ^{2}}}\), where μ and \(\sigma ^{2}\) are the mean and variance of X, respectively. There are two well-known techniques for deriving this theorem: the characteristic function method and the Bernoulli part extraction method. The characteristic function method estimates bounds on the characteristic function of a random variable. This method has been used in a number of studies, such as in the case of bounded random variables (see [3–6] and [7] for examples) and in the case of lattice random variables (see [7–9] and [10] for examples).
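
As a concrete illustration of this approximation (our own sketch, not part of the paper), the following Python snippet compares the exact point probabilities \(P(X=k)\) of a binomial random variable \(X\sim B(n,p)\) with the corresponding normal density values; the parameters n and p are arbitrary choices.

```python
# Illustration only: compare exact binomial point probabilities with the
# normal density suggested by the local limit theorem.
from math import comb, exp, pi, sqrt

n, p = 100, 0.3                      # hypothetical parameters
mu, sigma2 = n * p, n * p * (1 - p)  # mean and variance of X ~ B(n, p)

def binom_pmf(k: int) -> float:
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_density(k: int) -> float:
    return exp(-(k - mu) ** 2 / (2 * sigma2)) / sqrt(2 * pi * sigma2)

for k in range(20, 41, 5):
    print(k, binom_pmf(k), normal_density(k))
```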

Let \(X_{1},X_{2},\ldots , X_{n}\) be independent integer-valued random variables with mean \(\mu _{j}\) and variance \(\sigma _{j}^{2}\) for \(j=1,2,\ldots ,n\), and let

$$ S_{n} = \sum_{j=1}^{n}X_{j}, \qquad \mu = \sum_{j=1}^{n} \mu _{j}, \qquad \sigma ^{2} = \sum _{j=1}^{n}\sigma _{j}^{2}. $$

If \(P(X_{j}=1)=p_{j}=1-P(X_{j}=0)\), then \(X_{j}\) is called a Bernoulli random variable with parameter \(p_{j}\), and \(S_{n}\) is said to be a Poisson binomial random variable. In addition, when \(p_{1} = p_{2} = \cdots = p_{n} = p \), we call \(S_{n}\) a binomial random variable with parameters n and p and use the notation \(S_{n}\sim B(n,p)\). The first local limit theorem was proved by De Moivre and Laplace ([11], 1754) for a binomial random variable. We call X a lattice random variable with parameter \((a,d)\) if its values belong to \(\mathcal{L}(a,d) = \{ a + md : m \in \mathbb{Z} \}\), where a and \(d>0\) are integers. In addition, d is said to be maximal if there are no numbers \(a'\) and \(d' > d\) for which \(P(X\in \mathcal{L}(a',d')) = 1\), and in that case we call X a maximal lattice random variable with parameter \((a,d)\). Observe that a Bernoulli random variable is a maximal lattice random variable with parameter \((0,1)\). In 1971, Ibragimov and Linnik [12] treated the case where the \(X_{j}\)'s are identically distributed with common lattice \(\mathcal{L}(a,d)\) and gave the rate of convergence \(O ( \frac {1}{n^{\frac{1}{2}+\alpha}} )\), where \(0 < \alpha < \frac{1}{2}\). More precisely, they showed that if d is maximal and

$$ \int _{|x|\geq u} x^{2}F(dx) = O \biggl(\frac{1}{u^{2\alpha}} \biggr)\quad \text{as } u \to \infty , $$

where F is the distribution function of \(X_{1}\), then

$$ \sup_{k\in \mathbb{Z}} \biggl\vert \sqrt{n}P(S_{n}=na+kd) - \frac{d}{\sqrt{2\pi}\sigma _{1}}e^{- \frac{(na+kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert = O \biggl( \frac{1}{n^{\alpha}} \biggr), \quad 0 < \alpha < \frac{1}{2}. $$
(1.1)

A few years later, Petrov [13] proved that if \(E|X_{1}|^{3} < \infty \), then (1.1) holds with \(\alpha = \frac{1}{2}\). Moreover, for the case that the \(X_{j}\)'s are nonidentically distributed lattice random variables with parameter \((0,1)\) satisfying the third moment condition and some further properties, Petrov [13] gave the following result:

$$\begin{aligned} \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n}=k) - \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(k-\mu )^{2}}{2\sigma ^{2}}} \biggr\vert \leq \frac{C}{\sigma ^{2}}. \end{aligned}$$

Explicit constants for the error bound were first given by Korolev and Zhukov ([14], 2000). In 2017, Giuliano and Weber [15] used the Bernoulli part extraction method to give an error bound with explicit constants in the case of nonidentically distributed square integrable random variables taking values in a common lattice \(\mathcal{L}(a,d)\). Assuming a finite third moment, Siripraparat and Neammanee [1] used the characteristic function technique in 2021 to obtain the rate of convergence \(O ( \frac {1}{\sigma ^{2}} )\). For special cases, see [2] for the Poisson binomial and [16] for the binomial.

In this paper, we relax the third moment condition and obtain local limit theorems for sums of independent lattice integer-valued random variables, together with explicit constants for the error bound. Our technique is the characteristic function method inspired by Petrov [13] and Siripraparat and Neammanee [1]. Throughout this paper, let \(X_{1},X_{2},\ldots ,X_{n}\) be independent common lattice random variables with parameter \((a,d)\) such that \(E|X_{j}|^{2+\alpha} < \infty \), where \(0<\alpha < 1\), for \(j=1,2,\ldots ,n\), and let \(S_{n} = \sum_{j=1}^{n}X_{j}\) with mean μ and variance \(\sigma ^{2}\). The following are our main results.

Theorem 1.1

Let \(\beta = \sum_{j=1}^{n}\beta _{j}\), where \(\beta _{j} = 2\sum_{m=-\infty}^{\infty} p_{jm}p_{j(m+1)}\) and \(p_{jm} = P(X_{j}= a+md)\). If \(\beta >0\) and \(\sigma ^{2}>d^{2}\), then

$$ \begin{aligned} &\sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma \sqrt{2\pi}}e^{- \frac{(na +kd -\mu )^{2}}{2\sigma ^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{ (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}} (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{3-\alpha}{2}}}{\sigma ^{4}} \\ &\quad \quad {}+ \frac{0.3184d^{2}}{\sigma ^{2}\tau}e^{-\frac{\sigma ^{2}\tau ^{2}}{2d^{2}}} + \frac{1.5708}{\tau \beta} e^{-\frac{\tau ^{2}\beta}{\pi ^{2}}}, \end{aligned} $$

where \(\tau = \frac{d}{3^{\frac{1}{\alpha}} (\sum_{j=1}^{n}E|X_{j}-a|^{2+\alpha} )^{\frac{1}{2+\alpha}}}\).

Furthermore, if \(X_{1},X_{2},\ldots ,X_{n}\) are identically distributed and \(\beta _{1} >0\), then

$$\begin{aligned} \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \leq \frac{C_{1}}{n^{\frac{1+\alpha}{2}}}, \end{aligned}$$

where

$$\begin{aligned} C_{1} & = \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}} \\ & \quad {}+ \frac{0.6368\cdot 3^{\frac{3}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{\sigma _{1}^{4}} + \frac{15.5032\cdot 3^{\frac{3}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{d^{3}\beta _{1}^{2} }. \end{aligned}$$

We note that \(\beta _{j} > 0\) if \(X_{j}\) is a maximal lattice random variable. So, we can apply this result when d is maximal.
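
To get a feeling for the size of this bound, the following Python sketch (our own illustration, not part of the paper) evaluates the four terms of Theorem 1.1 from the summary quantities d, α, \(\sigma ^{2}\), β, and \(M=\sum_{j=1}^{n}E|X_{j}-a|^{2+\alpha}\); the numerical inputs are hypothetical placeholders.

```python
# Sketch: evaluate the four terms of the Theorem 1.1 bound from summary quantities.
from math import exp, pi

def theorem_1_1_bound(d, alpha, sigma2, beta, M):
    """M denotes sum_{j=1}^n E|X_j - a|^{2+alpha}; sigma2 denotes sigma^2."""
    tau = d / (3 ** (1 / alpha) * M ** (1 / (2 + alpha)))
    term1 = 0.0020 * d ** ((2 + alpha) * (1 + alpha) / 2) / M ** ((1 + alpha) / 2)
    term2 = (4.6171 * 3 ** (1 / alpha) * d ** ((alpha ** 2 - alpha + 2) / 2)
             * M ** ((3 - alpha) / 2) / sigma2 ** 2)
    term3 = 0.3184 * d ** 2 / (sigma2 * tau) * exp(-sigma2 * tau ** 2 / (2 * d ** 2))
    term4 = 1.5708 / (tau * beta) * exp(-tau ** 2 * beta / pi ** 2)
    return term1 + term2 + term3 + term4

# hypothetical inputs: d = 1, alpha = 0.5, sigma^2 = 250, beta = 200, M = 400
print(theorem_1_1_bound(d=1.0, alpha=0.5, sigma2=250.0, beta=200.0, M=400.0))
```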

Theorem 1.2

Let \(\upsilon := \min_{1\leq j \leq n}\upsilon _{j}\), where \(\upsilon _{j} = 2\sum_{m=-\infty}^{\infty}p_{jm}p_{j(m+j)}\). If \(\upsilon _{j} > 0\) for all \(j=1,2,\ldots ,n\) and \(\sigma ^{2}>d^{2}\), then

$$ \begin{aligned} &\sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma \sqrt{2\pi}}e^{- \frac{(na +kd -\mu )^{2}}{2\sigma ^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{ (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}} (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{3-\alpha}{2}}}{\sigma ^{4}} \\ &\quad \quad {}+ \frac{0.3184d^{2}}{\sigma ^{2}\tau}e^{-\frac{\sigma ^{2}\tau ^{2}}{2d^{2}}} + \exp \biggl(-\frac{n\upsilon}{4} \min \biggl( 1, \biggl( \frac{n\tau}{2\pi} \biggr)^{2} \biggr) \biggr), \end{aligned} $$

where \(\tau = \frac{d}{3^{\frac{1}{\alpha}} (\sum_{j=1}^{n}E|X_{j}-a|^{2+\alpha} )^{\frac{1}{2+\alpha}}}\).

Furthermore, if \(X_{1},X_{2},\ldots ,X_{n}\) are identically distributed and \(\upsilon _{j} > 0\) for all \(j=1,2,\ldots ,n\), then for \(n\geq ( \frac{2\pi \cdot 3^{\frac{1}{\alpha}}(E|X_{1}-a|^{2+\alpha})^{\frac{1}{2+\alpha}}}{d} )^{\frac{2+\alpha}{1+\alpha}}\),

$$ \begin{aligned} \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \leq \frac{C_{2}}{n^{\frac{1+\alpha}{2}}} + e^{- \frac{n\upsilon}{4}}, \end{aligned} $$

where

$$\begin{aligned} C_{2} & = \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}} \\ & \quad {}+ \frac{0.6368\cdot 3^{\frac{3}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{\sigma _{1}^{4}}. \end{aligned}$$

We organize this paper as follows: First, we give auxiliary results in Sect. 2 that will be used to prove the main theorems in Sect. 3. Finally, we give some examples in Sect. 4.

2 Auxiliary results

In the following lemmas, we use an idea from [17] to give bounds on characteristic functions that will be used to prove Theorem 1.1.

Lemma 2.1

Let X be any integer-valued random variable with mean \(\mu _{X}\), variance \(\sigma _{X}^{2}\), and characteristic function \(\psi _{X}\). If \(E|X|^{2+\alpha} < \infty \) for some \(0 < \alpha < 1\), then there exists a function \(g_{X}\) such that, for all \(|t|\leq (\frac{1}{3E|X|^{\alpha}} )^{\frac{1}{\alpha}}\),

\((i)\):

\(|\psi _{X}(t)| \geq \frac{1}{3}\) and

\((\mathit{ii})\):

\(\psi _{X}(t) = \exp \{i\mu _{X} t - \frac{1}{2}\sigma _{X}^{2}t^{2} + \int _{0}^{t}\frac{g_{X}(s)}{\psi _{X}(s)}\,\mathrm{d}s \}\) and \(\int _{0}^{t} | \frac{g_{X}(s)}{\psi _{X}(s)} |\,\mathrm{d}s \leq 9E|X|^{2+\alpha}|t|^{2+\alpha}\).

Proof

\((i)\) Using the fact that for \(x\in \mathbb{R}\), \(e^{ix} = 1 + 2^{1-\alpha}|x|^{\alpha}\Theta \) for some complex function Θ such that \(|\Theta | \leq 1\) ([17], p. 359), we get that

$$\begin{aligned} Ee^{itX} = E\bigl(1+\Theta _{1}2^{1-\alpha} \vert tX \vert ^{\alpha}\bigr) = 1 + 2^{1- \alpha}E\bigl(\Theta _{1} \vert X \vert ^{\alpha}\bigr) \vert t \vert ^{\alpha}, \end{aligned}$$
(2.1)

where \(\Theta _{1}\) is a complex random variable such that \(|\Theta _{1}| \leq 1\). From this fact and the inequality \(|z_{1} +z_{2}| \geq |z_{1}|-|z_{2}|\) for complex numbers \(z_{1}\) and \(z_{2}\), we can see that

$$\begin{aligned} \bigl\vert Ee^{itX} \bigr\vert & = \bigl\vert 1 + 2^{1-\alpha}E\bigl(\Theta _{1} \vert X \vert ^{\alpha} \bigr) \vert t \vert ^{ \alpha } \bigr\vert \\ & \geq 1 - 2^{1-\alpha}E\bigl( \vert \Theta _{1} \vert \vert X \vert ^{\alpha}\bigr) \vert t \vert ^{\alpha } \\ & \geq 1 - 2^{1-\alpha}E \vert X \vert ^{\alpha} \vert t \vert ^{\alpha } \\ & \geq 1 - 2E \vert X \vert ^{\alpha} \vert t \vert ^{\alpha}. \end{aligned}$$

Then, for all \(|t|\leq (\frac{1}{3E|X|^{\alpha}} )^{\frac{1}{\alpha}} \), we have \(|\psi _{X}(t)| = |Ee^{itX}| \geq \frac{1}{3}\).

\((\mathit{ii})\) Let \(t\in \mathbb{R}\) be such that \(|t| \leq (\frac{1}{3E|X|^{\alpha}} )^{\frac{1}{\alpha}}\). Since \(\psi _{X}(t) = Ee^{itX}\), we obtain \(\psi _{X}'(t) = iE(Xe^{itX}) \), which implies that

$$\begin{aligned} \psi _{X}'(t) & = \biggl(\bigl(i\mu _{X}- \sigma _{X}^{2}t\bigr) + \frac{g_{X}(t)}{\psi _{X}(t)}\biggr)\psi _{X}(t), \end{aligned}$$

where

$$\begin{aligned} g_{X}(t) & = - \bigl(i\mu _{X}-\sigma _{X}^{2}t\bigr)Ee^{itX} + iE\bigl(Xe^{itX} \bigr). \end{aligned}$$

Hence

$$\begin{aligned} \frac{\psi _{X}'(t)}{\psi _{X}(t)} & = i\mu _{X}-\sigma _{X}^{2}t + \frac{g_{X}(t)}{\psi _{X}(t)} \end{aligned}$$

and then

$$\begin{aligned} \ln \psi _{X}(t) = \int _{0}^{t} \frac{\psi _{X}'(s)}{\psi _{X}(s)} \,\mathrm{d}s = i \mu _{X} t - \frac{1}{2}\sigma _{X}^{2}t^{2} + \int _{0}^{t} \frac{g_{X}(s)}{\psi _{X}(s)}\,\mathrm{d}s, \end{aligned}$$

that is,

$$\begin{aligned} \psi _{X}(t) = \exp \biggl\{ i\mu _{X} t - \frac{1}{2}\sigma _{X}^{2}t^{2} + \int _{0}^{t}\frac{g_{X}(s)}{\psi _{X}(s)}\,\mathrm{d}s \biggr\} . \end{aligned}$$

From the fact that for \(x\in \mathbb{R}\), \(e^{ix} = 1 + ix + \frac{2^{1-\alpha}}{1+\alpha}|x|^{1+\alpha}\Theta \) for some complex function Θ such that \(|\Theta | \leq 1\) ([17], p. 359), we have that

$$\begin{aligned} &Ee^{itX} = 1 + itEX + \frac{2^{1-\alpha}}{1+\alpha}E\bigl(\Theta _{2} \vert X \vert ^{1+ \alpha}\bigr) \vert t \vert ^{1+\alpha} \end{aligned}$$
(2.2)
$$\begin{aligned} &\text{and }\quad iE\bigl(Xe^{itX}\bigr) = i\mu _{X} - tEX^{2} + \frac{2^{1-\alpha}}{1+\alpha}E\bigl(i\Theta _{2} \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+ \alpha}, \end{aligned}$$
(2.3)

where \(\Theta _{2}\) is a complex random variable such that \(|\Theta _{2}| \leq 1\). From (2.1)–(2.3), we have

$$\begin{aligned} g_{X}(t) & = -i\mu _{X}Ee^{itX} + \sigma _{X}^{2}tEe^{itX} + iE\bigl(Xe^{itX} \bigr) \\ &= \frac{2^{1-\alpha}}{1+\alpha}\mu _{X} E\bigl(i\Theta _{2} \vert X \vert ^{1+\alpha}\bigr) \vert t \vert ^{1+ \alpha} + 2^{1-\alpha} \sigma _{X}^{2}E\bigl(\Theta _{1} \vert X \vert ^{\alpha}\bigr) \vert t \vert ^{1+ \alpha} \\ & \quad {}+ \frac{2^{1-\alpha}}{1+\alpha}E\bigl(i\Theta _{2} \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+ \alpha}. \end{aligned}$$
(2.4)

According to Lyapunov’s inequality: \((E|X|^{r})^{\frac{1}{r}} \leq (E|X|^{s})^{\frac{1}{s}}\), where \(0 < r \leq s\), we have that \(E|X| \leq (E|X|^{2+\alpha})^{\frac{1}{2+\alpha}}\) and \(E|X|^{1+\alpha} \leq (E|X|^{2+\alpha})^{\frac{1+\alpha}{2+\alpha}}\), which imply that

$$ \mu _{X}E \vert X \vert ^{1+\alpha} \leq E \vert X \vert E \vert X \vert ^{1+\alpha} \leq E \vert X \vert ^{2+\alpha}. $$

We can use the same technique to show that

$$\begin{aligned} \sigma _{X}^{2}E \vert X \vert ^{\alpha }& \leq EX^{2}E \vert X \vert ^{\alpha }\leq E \vert X \vert ^{2+ \alpha}. \end{aligned}$$

From these facts and (2.4), we have

$$\begin{aligned} \bigl\vert g_{X}(t) \bigr\vert & \leq \frac{2^{1-\alpha}}{1+\alpha}\mu _{X} E\bigl( \vert i \Theta _{2} \vert \vert X \vert ^{1+\alpha}\bigr) \vert t \vert ^{1+\alpha} + 2^{1-\alpha} \sigma _{X}^{2}E\bigl( \vert \Theta _{1} \vert \vert X \vert ^{\alpha}\bigr) \vert t \vert ^{1+\alpha} \\ & \quad {}+ \frac{2^{1-\alpha}}{1+\alpha}E\bigl( \vert i\Theta _{2} \vert \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+ \alpha} \\ & \leq \frac{2^{1-\alpha}}{1+\alpha}\mu _{X} E\bigl( \vert X \vert ^{1+\alpha}\bigr) \vert t \vert ^{1+ \alpha} + 2^{1-\alpha} \sigma _{X}^{2}E\bigl( \vert X \vert ^{\alpha}\bigr) \vert t \vert ^{1+\alpha} \\ & \quad {}+ \frac{2^{1-\alpha}}{1+\alpha}E\bigl( \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+\alpha} \\ & \leq \frac{2^{1-\alpha}}{1+\alpha} E\bigl( \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+\alpha} + 2^{1-\alpha} E\bigl( \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+\alpha} \\ & \quad {}+ \frac{2^{1-\alpha}}{1+\alpha}E\bigl( \vert X \vert ^{2+\alpha}\bigr) \vert t \vert ^{1+\alpha} \\ & = \biggl( 2^{1-\alpha} + \frac{2^{2-\alpha}}{1+\alpha} \biggr)E \vert X \vert ^{2+ \alpha} \vert t \vert ^{1+\alpha} \\ & \leq 6E \vert X \vert ^{2+\alpha} \vert t \vert ^{1+\alpha}. \end{aligned}$$
(2.5)

Hence we can conclude from \((i)\) and (2.5) that for all \(|t|\leq (\frac{1}{3E|X|^{\alpha}} )^{\frac{1}{\alpha}} \) we have

$$\begin{aligned} \biggl\vert \frac{g_{X}(t)}{\psi _{X}(t)} \biggr\vert &\leq 18E \vert X \vert ^{2+\alpha} \vert t \vert ^{1+ \alpha}, \end{aligned}$$

which implies that

$$\begin{aligned} \int _{0}^{t} \biggl\vert \frac{g_{X}(s)}{\psi _{X}(s)} \biggr\vert \,\mathrm{d}s &\leq \frac{18}{2+\alpha}E \vert X \vert ^{2+\alpha} \vert t \vert ^{2+\alpha} \leq 9E \vert X \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}. \end{aligned}$$

 □
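
As a quick numerical sanity check of part \((i)\) (our own illustration with a hypothetical distribution, not part of the proof), one can evaluate \(|\psi _{X}(t)|\) on the stated range of t:

```python
# Illustration only: check |psi_X(t)| >= 1/3 for |t| <= (1/(3 E|X|^alpha))^(1/alpha)
# for a hypothetical integer-valued distribution.
import numpy as np

values = np.array([0, 1, 2, 5])          # hypothetical support
probs = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical probabilities
alpha = 0.5

e_abs_alpha = np.sum(probs * np.abs(values) ** alpha)
t_max = (1.0 / (3.0 * e_abs_alpha)) ** (1.0 / alpha)

ts = np.linspace(-t_max, t_max, 2001)
psi = (probs[None, :] * np.exp(1j * np.outer(ts, values))).sum(axis=1)
print(t_max, np.abs(psi).min())          # the minimum modulus should be at least 1/3
```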

Lemma 2.2

Let \(\tau = \frac{1}{3^{\frac{1}{\alpha}}} ( \frac{1}{\sum_{j=1}^{n}E|X_{j}|^{2+\alpha}} )^{ \frac{1}{2+\alpha}}\). Then

$$ \bigl\vert \psi (t) - e^{it\mu -\frac{1}{2}\sigma ^{2}t^{2}} \bigr\vert \leq 12.5606 \sum _{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}e^{-\frac{1}{2} \sigma ^{2}t^{2}} $$

for all \(|t|\leq \tau \).

Proof

From Lyapunov’s inequality, we have

$$\begin{aligned} \bigl(E \vert X_{l} \vert ^{\alpha}\bigr)^{\frac{1}{\alpha}} \leq \bigl(E \vert X_{l} \vert ^{2+\alpha} \bigr)^{ \frac{1}{2+\alpha}}, \end{aligned}$$

which implies that

$$\begin{aligned} \biggl(\frac{1}{\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha}} \biggr)^{ \frac{1}{2+\alpha}} \leq \biggl(\frac{1}{E \vert X_{l} \vert ^{2+\alpha}} \biggr)^{ \frac{1}{2+\alpha}} \leq \biggl(\frac{1}{E \vert X_{l} \vert ^{\alpha}} \biggr)^{ \frac{1}{\alpha}} \end{aligned}$$

for all \(l=1,2,\ldots ,n\). This provides that

$$ \biggl( \frac{1}{3^{\frac{2+\alpha}{\alpha}}\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha}} \biggr)^{\frac{1}{2+\alpha}} \leq \biggl(\frac{1}{3E \vert X_{l} \vert ^{\alpha}} \biggr)^{\frac{1}{\alpha}} $$

for all \(l=1,2,\ldots ,n\). From this fact and Lemma 2.1, we have for all \(|t| \leq \tau \),

$$\begin{aligned} \psi (t) = \exp \Biggl\{ i\mu t - \frac{1}{2}\sigma ^{2}t^{2} + \sum_{j=1}^{n} G_{j}(t) \Biggr\} , \end{aligned}$$
(2.6)

where

$$\begin{aligned} G_{j}(t) = \int _{0}^{t}\frac{g_{X_{j}}(s)}{\psi _{X_{j}}(s)} \,\mathrm{d}s \quad \text{and}\quad \bigl\vert G_{j}(t) \bigr\vert \leq 9E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+ \alpha}. \end{aligned}$$

From (2.6) and the inequality \(|e^{z} - 1| \leq |z|e^{|z|}\) for a complex number z, we get that for all \(|t|\leq \tau \),

$$\begin{aligned} \bigl\vert \psi (t) - e^{i\mu t - \frac{1}{2}\sigma ^{2}t^{2}} \bigr\vert & = \bigl\vert e^{ i\mu t - \frac{1}{2}\sigma ^{2}t^{2} + \sum _{j=1}^{n} G_{j}(t)} - e^{i\mu t - \frac{1}{2}\sigma ^{2}t^{2}} \bigr\vert \\ & = \bigl\vert e^{i\mu t - \frac{1}{2}\sigma ^{2}t^{2}} \bigr\vert \bigl\vert e^{ \sum _{j=1}^{n} G_{j}(t)} - 1 \bigr\vert \\ & \leq \Biggl\vert \sum_{j=1}^{n} G_{j}(t) \Biggr\vert e^{- \frac{1}{2} \sigma ^{2}t^{2} + \vert \sum _{j=1}^{n} G_{j}(t) \vert } \\ & \leq \sum_{j=1}^{n} \bigl\vert G_{j}(t) \bigr\vert e^{- \frac{1}{2} \sigma ^{2}t^{2} + \sum _{j=1}^{n} \vert G_{j}(t) \vert } \\ & \leq 9\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}\times \exp \Biggl\{ -\frac{1}{2}\sigma ^{2}t^{2} + 9\sum _{j=1}^{n}E \vert X_{j} \vert ^{2+ \alpha} \vert t \vert ^{2+\alpha} \Biggr\} \\ & \leq 9\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}\times \exp \biggl\{ -\frac{1}{2}\sigma ^{2}t^{2} + \frac{9}{3^{\frac{2+\alpha}{\alpha}}} \biggr\} \\ & \leq 9\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}\times \exp \biggl\{ -\frac{1}{2}\sigma ^{2}t^{2} + \frac{9}{3^{3}} \biggr\} \\ & \leq 12.5606\sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \vert t \vert ^{2+\alpha}e^{ - \frac{1}{2}\sigma ^{2}t^{2}}. \end{aligned}$$

 □
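
The bound of Lemma 2.2 can likewise be checked numerically. The sketch below (our own illustration; the distribution and n are hypothetical) compares \(\psi (t)\), the characteristic function of the sum, with \(e^{i\mu t-\frac{1}{2}\sigma ^{2}t^{2}}\) on \(|t|\leq \tau \):

```python
# Illustration only: compare |psi(t) - exp(i mu t - sigma^2 t^2 / 2)| with the
# Lemma 2.2 bound for the sum of n i.i.d. copies of a hypothetical distribution.
import numpy as np

values = np.array([0, 1, 3])
probs = np.array([0.5, 0.3, 0.2])
alpha, n = 0.5, 50

m1 = np.sum(probs * values)
m2 = np.sum(probs * values ** 2)
M = n * np.sum(probs * np.abs(values) ** (2 + alpha))   # sum_j E|X_j|^{2+alpha}
mu, sigma2 = n * m1, n * (m2 - m1 ** 2)

tau = 1.0 / (3 ** (1 / alpha) * M ** (1 / (2 + alpha)))
ts = np.linspace(-tau, tau, 1001)
psi = ((probs[None, :] * np.exp(1j * np.outer(ts, values))).sum(axis=1)) ** n
gauss = np.exp(1j * mu * ts - 0.5 * sigma2 * ts ** 2)
bound = 12.5606 * M * np.abs(ts) ** (2 + alpha) * np.exp(-0.5 * sigma2 * ts ** 2)
print(np.max(np.abs(psi - gauss) - bound))   # should be <= 0 up to rounding error
```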

3 Proof of the main results

3.1 Proof of Theorem 1.1

Proof

First, we will prove the theorem in the case of \(a=0\) and \(d=1\). Let \(Y_{1},Y_{2},\ldots ,Y_{n}\) be independent common lattice random variables with parameter \((0,1)\), and let

$$ W_{n} = Y_{1} + Y_{2} + \cdots + Y_{n} $$

with \(E(W_{n})=\mu _{W}\), \(Var(W_{n})=\sigma _{W}^{2}\) and the characteristic function \(\psi _{W}\). Suppose that \(\beta _{Y_{j}} = 2\sum_{m=-\infty}^{\infty}P(Y_{j} = m)P(Y_{j} =m+1) > 0\) for all \(j=1,2,\ldots ,n\), and let \(\tau = \frac{1}{3^{\frac{1}{\alpha}}} ( \frac{1}{\sum_{j=1}^{n}E|Y_{j}|^{2+\alpha}} )^{ \frac{1}{2+\alpha}}\). Since \(P(W_{n} = k) = \frac{1}{2\pi}\int _{-\pi}^{\pi} e^{-ikt} \psi _{W}(t) \,\mathrm{d}t\) ([18], p. 511), we have

$$\begin{aligned} & \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad = \biggl\vert \frac{1}{2\pi} \int _{-\pi}^{\pi} e^{-ikt}\psi _{ W}(t) \,\mathrm{d}t - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \leq \frac{1}{2\pi} \biggl\vert \int _{ \vert t \vert < \tau} e^{-ikt}\psi _{ W}(t) \,\mathrm{d}t - \int _{ \vert t \vert < \tau}e^{it(\mu _{ W} -k )-\frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t \biggr\vert \\ &\quad \quad {}+ \frac{1}{2\pi} \biggl\vert \int _{ \vert t \vert < \tau}e^{it(\mu _{ W}-k)-\frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t - \frac{\sqrt{2\pi}}{\sigma _{W}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \quad {}+ \frac{1}{2\pi} \biggl\vert \int _{\tau \leq \vert t \vert \leq \pi} e^{-ikt} \psi _{W}(t) \,\mathrm{d}t \biggr\vert \\ &\quad := \vert A \vert + \vert B \vert + \vert C \vert . \end{aligned}$$

From Lemma 2.2, we have

$$\begin{aligned} \vert A \vert & \leq \frac{1}{2\pi} \int _{ \vert t \vert < \tau} \bigl\vert e^{-ikt} \bigr\vert \bigl\vert \psi _{ W}(t) - e^{it\mu _{W} - \frac{1}{2} \sigma _{W}^{2}t^{2}} \bigr\vert \,\mathrm{d}t \\ & = \frac{1}{2\pi} \int _{ \vert t \vert < \tau} \bigl\vert \psi _{W}(t) - e^{it\mu _{W} - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \bigr\vert \,\mathrm{d}t \\ & \leq \frac{12.5606}{\pi}\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \int _{0}^{\tau} \vert t \vert ^{2+\alpha}e^{ - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t. \end{aligned}$$

To bound \(\int _{0}^{\tau} |t|^{2+\alpha}e^{ - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t\), we let \(\tilde{\tau} = \tau ^{\frac{2+\alpha}{2}}\) and note that

$$\begin{aligned} \int _{0}^{\tilde{\tau}} t^{2+\alpha}e^{ - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t & \leq \int _{0}^{\tilde{\tau}} t^{2+\alpha} \,\mathrm{d}t \\ & = \frac{\tau ^{\frac{(2+\alpha )(3+\alpha )}{2}}}{3+\alpha} \\ & \leq \frac{1}{3^{\frac{\alpha ^{2} + 7\alpha +6}{2\alpha}} (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{3+\alpha}{2}}} \\ & \leq \frac{0.0005}{ (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{3+\alpha}{2}}} \end{aligned}$$

and

$$\begin{aligned} \int _{\tilde{\tau}}^{\tau} t^{2+\alpha}e^{ - \frac{1}{2} \sigma _{W}^{2}t^{2}} \,\mathrm{d}t & = \int _{\tilde{\tau}}^{\tau} \frac{t^{3}}{t^{1-\alpha}}e^{ - \frac{1}{2}\sigma _{W}^{2}t^{2}} \,\mathrm{d}t \\ & \leq \frac{1}{\tau ^{\frac{(2+\alpha )(1-\alpha )}{2}}} \int _{\tilde{\tau}}^{\tau} t^{3} e^{ - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t \\ & \leq \frac{1}{\tau ^{\frac{(2+\alpha )(1-\alpha )}{2}}} \int _{0}^{\infty} t^{3} e^{ - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t \\ & = 3^{\frac{(2+\alpha )(1-\alpha )}{2\alpha}} \Biggl(\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+ \alpha} \Biggr)^{\frac{1-\alpha}{2}} \biggl( \frac{2}{\sigma _{W}^{4}} \biggr) \\ & \leq \frac{1.1548\cdot 3^{\frac{1}{\alpha}}}{\sigma _{W}^{4}} \Biggl(\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \Biggr)^{ \frac{1-\alpha}{2}}. \end{aligned}$$

Hence,

$$\begin{aligned} \vert A \vert & \leq \frac{12.5606}{\pi}\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+ \alpha} \biggl( \int _{0}^{\tilde{\tau}} t^{2+\alpha}e^{ - \frac{1}{2}\sigma _{W}^{2}t^{2}} \,\mathrm{d}t + \int _{\tilde{\tau}}^{\tau} t^{2+\alpha}e^{ - \frac{1}{2}\sigma _{W}^{2}t^{2}} \,\mathrm{d}t \biggr) \\ & \leq \frac{0.0020}{ (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}}{\sigma _{W}^{4}} \Biggl(\sum _{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \Biggr)^{ \frac{3-\alpha}{2}}. \end{aligned}$$
(3.1)

By the fact that

$$\begin{aligned} \int _{|t| < \tau}e^{it(\mu _{W} -k ) - \frac{1}{2}\sigma _{W}^{2}t^{2}} \,\mathrm{d}t & = \int _{\mathbb{R}}e^{it(\mu _{ W} -k ) - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t - \int _{|t| \geq \tau}e^{it(\mu _{W} -k ) - \frac{1}{2}\sigma _{W}^{2}t^{2}} \,\mathrm{d}t \\ & = \frac{1}{\sigma _{W}} \int _{ \mathbb{R}}e^{ \frac{it(\mu _{W} -k )}{\sigma _{W}} - \frac{t^{2}}{2}} \,\mathrm{d}t - \frac{1}{\sigma _{W}} \int _{|t| \geq \sigma _{W}\tau}e^{ \frac{it(\mu _{W} -k )}{\sigma _{W}} - \frac{t^{2}}{2}} \,\mathrm{d}t \\ & = \frac{\sqrt{2\pi}}{\sigma _{W}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} - \frac{1}{\sigma _{W}} \int _{|t| \geq \sigma _{W}\tau}e^{ \frac{it(\mu _{W} -k )}{\sigma _{W}} - \frac{t^{2}}{2}} \,\mathrm{d}t, \end{aligned}$$

we have

$$\begin{aligned} B & = \frac{1}{2\pi} \int _{|t| < \tau}e^{it(\mu _{ W} -k ) - \frac{1}{2}\sigma _{ W}^{2}t^{2}} \,\mathrm{d}t - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \\ & = -\frac{1}{2\pi \sigma _{W}} \int _{|t| \geq \sigma _{W}\tau}e^{ \frac{it(\mu _{W} -k )}{\sigma _{W}} - \frac{t^{2}}{2}} \,\mathrm{d}t, \end{aligned}$$

and hence,

$$\begin{aligned} \vert B \vert & \leq \frac{1}{2\pi \sigma _{W}} \int _{ \vert t \vert \geq \sigma _{W} \tau}e^{- \frac{t^{2}}{2}} \,\mathrm{d}t \\ & \leq \frac{1}{\pi \sigma _{W}^{2}\tau} \int _{\sigma _{W}\tau}^{\infty} te^{- \frac{t^{2}}{2}} \,\mathrm{d}t \\ & = \frac{0.3184}{\sigma _{W}^{2}\tau}e^{ \frac{-\sigma _{W}^{2}\tau ^{2}}{2}}. \end{aligned}$$
(3.2)

Using the fact that \(|\psi _{W}(t)| \leq e^{-\frac{1}{\pi ^{2}}\beta _{ W}t^{2}}\), where \(\beta _{W} = \sum_{j=1}^{n}\beta _{ Y_{j}}\), for \(t\in [0,\pi )\) ([1], p. 5), we have

$$\begin{aligned} \vert C \vert & = \biggl\vert \frac{1}{2\pi} \int _{\tau \leq \vert t \vert \leq \pi} e^{-ikt} \psi _{W}(t) \,\mathrm{d}t \biggr\vert \\ & \leq \frac{1}{2\pi} \int _{\tau \leq \vert t \vert \leq \pi} \bigl\vert \psi _{ W}(t) \bigr\vert \,\mathrm{d}t \\ & = \frac{1}{\pi} \int _{\tau}^{\pi} \bigl\vert \psi _{W}(t) \bigr\vert \,\mathrm{d}t \\ & \leq \frac{1}{\pi} \int _{\tau}^{\pi} e^{- \frac{1}{\pi ^{2}}\beta _{W} t^{2}} \,\mathrm{d}t \\ & \leq \frac{1}{\pi \tau} \int _{\tau}^{\infty} te^{- \frac{1}{\pi ^{2}}\beta _{W} t^{2}} \,\mathrm{d}t \\ & = \frac{1}{\pi \tau} \biggl( \frac{\pi ^{2}e^{-\frac{\tau ^{2}\beta _{W}}{\pi ^{2}}}}{2\beta _{W}} \biggr) \\ & \leq \frac{1.5708}{\tau \beta _{W}} e^{- \frac{\tau ^{2}\beta _{W}}{\pi ^{2}}}. \end{aligned}$$
(3.3)

From (3.1)–(3.3), we have

$$\begin{aligned} & \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020}{ (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}}{\sigma _{W}^{4}} \Biggl(\sum _{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \Biggr)^{ \frac{3-\alpha}{2}} \\ &\quad \quad {}+ \frac{0.3184}{\sigma _{W}^{2}\tau}e^{ \frac{-\sigma _{W}^{2}\tau ^{2}}{2}} + \frac{1.5708}{\tau \beta _{W}}e^{- \frac{\tau ^{2}\beta _{W}}{\pi ^{2}}}. \end{aligned}$$
(3.4)

In general, let \(X_{1},X_{2},\ldots ,X_{n}\) be independent lattice random variables with parameter \((a,d)\). For \(j=1,2,\ldots ,n\), let \(Y_{j} = \frac{X_{j}-a}{d}\) and \(W_{n} = Y_{1} + Y_{2} + \cdots + Y_{n}\). Observe that \(Y_{1},Y_{2},\ldots ,Y_{n}\) are independent common lattice random variables with parameter \((0,1)\) and

$$\begin{aligned} & \mu _{W} = \frac{\mu -na}{d},\qquad \sigma _{ W}^{2}= \frac{\sigma ^{2}}{d^{2}}, \qquad P(Y_{j} = m) = P(X_{j} = a+dm), \end{aligned}$$
(3.5)
$$\begin{aligned} & E \vert Y_{j} \vert ^{2+\alpha} = \frac{E \vert X_{j}-a \vert ^{2+\alpha}}{d^{2+\alpha}}, \qquad \tau = \frac{d}{3^{\frac{1}{\alpha}} (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{1}{2+\alpha}}}. \end{aligned}$$
(3.6)

From (3.4)–(3.6), we have

$$ \begin{aligned} & \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{ (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}} (\sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} )^{\frac{3-\alpha}{2}}}{\sigma ^{4}} \\ &\quad \quad {}+ \frac{0.3184d^{2}}{\sigma ^{2}\tau}e^{-\frac{\sigma ^{2}\tau ^{2}}{2d^{2}}} + \frac{1.5708}{\tau \beta} e^{-\frac{\tau ^{2}\beta}{\pi ^{2}}}. \end{aligned} $$

From this fact and the fact that

$$\begin{aligned} & \biggl\vert P(S_{n} = na +kd) - \frac{d}{\sigma \sqrt{2\pi}}e^{- \frac{(na +kd -\mu )^{2}}{2\sigma ^{2}}} \biggr\vert = \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k -\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert , \end{aligned}$$

we have the conclusion of the theorem.

Furthermore, if \(X_{1},X_{2},\ldots ,X_{n}\) are identically distributed, then

$$\begin{aligned} \mu = n\mu _{1},\qquad \sigma = \sigma _{1} \sqrt{n}, \qquad \sum_{j=1}^{n}E \vert X_{j}-a \vert ^{2+\alpha} = nE \vert X_{1}-a \vert ^{2+\alpha}, \quad \text{and}\quad \beta = n\beta _{1}, \end{aligned}$$

which imply that

$$\begin{aligned} &\sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}n^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}n^{\frac{1+\alpha}{2}}} \\ &\quad \quad {}+ \frac{0.3184\cdot 3^{\frac{1}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{\sigma _{1}^{2}n^{\frac{1+\alpha}{2+\alpha}}} \exp \biggl( \frac{-\sigma _{1}^{2}n^{\frac{\alpha}{2+\alpha}}}{2\cdot 3^{\frac{2}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}} \biggr) \\ &\quad \quad {}+ \frac{1.5708\cdot 3^{\frac{1}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{d\beta _{1}n^{\frac{1+\alpha}{2+\alpha}}} \exp \biggl( \frac{-d^{2}\beta _{1}n^{\frac{\alpha}{2+\alpha}}}{3^{\frac{2}{\alpha}}\pi ^{2}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}} \biggr). \end{aligned}$$
(3.7)

Since \(\frac{1+2\alpha}{2+\alpha} \geq \frac{1+\alpha}{2}\) and \(e^{-x} \leq \frac{1}{x}\) for a real number \(x>0\), we obtain that

$$\begin{aligned} & \frac{0.3184\cdot 3^{\frac{1}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{\sigma _{1}^{2}n^{\frac{1+\alpha}{2+\alpha}}} \exp \biggl( \frac{-\sigma _{1}^{2}n^{\frac{\alpha}{2+\alpha}}}{2\cdot 3^{\frac{2}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}} \biggr) \\ &\quad \leq \frac{0.3184\cdot 3^{\frac{1}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{\sigma _{1}^{2}n^{\frac{1+\alpha}{2+\alpha}}} \biggl( \frac{2\cdot 3^{\frac{2}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}}{\sigma _{1}^{2}n^{\frac{\alpha}{2+\alpha}}} \biggr) \\ &\quad\leq \frac{0.6368\cdot 3^{\frac{3}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{\sigma _{1}^{4}n^{\frac{1+\alpha}{2}}} \end{aligned}$$
(3.8)

and

$$\begin{aligned} & \frac{1.5708\cdot 3^{\frac{1}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{d\beta _{1}n^{\frac{1+\alpha}{2+\alpha}}} \exp \biggl( \frac{-d^{2}\beta _{1}n^{\frac{\alpha}{2+\alpha}}}{3^{\frac{2}{\alpha}}\pi ^{2}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}} \biggr) \\ &\quad \leq \frac{1.5708\cdot 3^{\frac{1}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{d\beta _{1}n^{\frac{1+\alpha}{2+\alpha}}} \biggl( \frac{3^{\frac{2}{\alpha}}\pi ^{2}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}}{d^{2}\beta _{1}n^{\frac{\alpha}{2+\alpha}}} \biggr) \\ &\quad \leq \frac{15.5032\cdot 3^{\frac{3}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{d^{3}\beta _{1}^{2} n^{\frac{1+\alpha}{2}}}. \end{aligned}$$
(3.9)

From (3.7)–(3.9), we have

$$\begin{aligned} \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \leq \frac{C_{1}}{n^{\frac{1+\alpha}{2}}}, \end{aligned}$$

where

$$\begin{aligned} C_{1} & = \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}} \\ & \quad {}+ \frac{0.6368\cdot 3^{\frac{3}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{\sigma _{1}^{4}} + \frac{15.5032\cdot 3^{\frac{3}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{d^{3}\beta _{1}^{2} }. \end{aligned}$$

 □
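
The inversion formula \(P(W_{n} = k) = \frac{1}{2\pi}\int _{-\pi}^{\pi} e^{-ikt} \psi _{W}(t)\,\mathrm{d}t\) used at the beginning of the proof can also be verified numerically; the following sketch (our own illustration) does so for a hypothetical Poisson binomial random variable.

```python
# Illustration only: check the inversion formula
# P(W_n = k) = (1/(2 pi)) * integral_{-pi}^{pi} e^{-ikt} psi_W(t) dt.
import numpy as np

p = np.array([0.2, 0.7, 0.4, 0.55, 0.3, 0.65, 0.25, 0.8])  # hypothetical parameters

# exact pmf of W_n by repeated convolution of Bernoulli pmfs
pmf = np.array([1.0])
for pj in p:
    pmf = np.convolve(pmf, [1 - pj, pj])

N = 4096
ts = np.linspace(-np.pi, np.pi, N, endpoint=False)
support = np.arange(len(pmf))
psi = (pmf[None, :] * np.exp(1j * np.outer(ts, support))).sum(axis=1)

k = 3
inv = ((np.exp(-1j * k * ts) * psi).sum() / N).real   # Riemann sum of the inversion integral
print(pmf[k], inv)   # the two numbers should agree to machine precision
```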

3.2 Proof of Theorem 1.2

Proof

By the same argument as in Theorem 1.1, it suffices to prove the theorem in the case \(a=0\) and \(d=1\). Let \(Y_{1},Y_{2},\ldots ,Y_{n}\) be independent common lattice random variables with parameter \((0,1)\) with characteristic functions \(\psi _{Y_{j}}\), and let

$$ W_{n} = Y_{1} + Y_{2} + \cdots + Y_{n} $$

with \(E(W_{n})=\mu _{W}\), \(Var(W_{n})=\sigma _{W}^{2}\), and characteristic function \(\psi _{W}\). Suppose that \(\upsilon _{Y_{j}} = 2\sum_{m=-\infty}^{ \infty}P(Y_{j} = m)P(Y_{j} =m+j) > 0\) for all \(j=1,2,\ldots ,n\). From (3.1)–(3.2) in Theorem 1.1, we have

$$ \begin{aligned} & \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{-\frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020}{ (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}}{\sigma _{W}^{4}} \Biggl(\sum _{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \Biggr)^{\frac{3-\alpha}{2}} + \frac{0.3184}{\sigma _{W}^{2}\tau}e^{-\frac{\sigma _{W}^{2}\tau ^{2}}{2}} + \vert C \vert . \end{aligned} $$
(3.11)

Siripraparat and Neammanee ([1], p. 6) showed that

$$\begin{aligned} \ln \bigl( \bigl\vert \psi _{Y_{j}}(t) \bigr\vert \bigr) & \leq - \sum_{m=-\infty}^{ \infty}\sum _{l=-\infty}^{\infty}P(Y_{j}=m)P(Y_{j}=l) \sin ^{2} \biggl((m-l) \frac{t}{2} \biggr). \end{aligned}$$

From this fact and the fact that

$$ \sum_{j=1}^{n} \sin ^{2} \biggl( \frac{jt}{2} \biggr) \geq \frac{n}{4}\min \biggl( 1, \biggl( \frac{nt}{2\pi} \biggr)^{2} \biggr) $$

for \(|t|\leq \pi \) and \(n\geq 2\) ([19], p. 399), we have

$$\begin{aligned} \bigl\vert \psi _{W}(t) \bigr\vert & =\prod _{j=1}^{n} \bigl\vert \psi _{ Y_{j}}(t) \bigr\vert \\ & \leq \prod_{j=1}^{n}\exp \Biggl(-2\sum _{m=-\infty}^{ \infty}P(Y_{j} = m)P(Y_{j} =m+j) \sin ^{2} \biggl(\frac{jt}{2} \biggr) \Biggr) \\ & \leq \exp \Biggl(-\sum_{j=1}^{n}\upsilon _{ Y_{j}}\sin ^{2} \biggl(\frac{jt}{2} \biggr) \Biggr) \\ & \leq \exp \Biggl(-\upsilon _{W}\sum_{j=1}^{n} \sin ^{2} \biggl(\frac{jt}{2} \biggr) \Biggr) \\ & \leq \exp \biggl(-\frac{n\upsilon _{W}}{4}\min \biggl( 1, \biggl( \frac{nt}{2\pi} \biggr)^{2} \biggr) \biggr), \end{aligned}$$

where \(\upsilon _{W} = \min_{1\leq j \leq n} \upsilon _{Y_{j}}\). Hence,

$$\begin{aligned} \vert C \vert & \leq \frac{1}{2\pi} \int _{\tau \leq \vert t \vert \leq \pi} \bigl\vert \psi _{W}(t) \bigr\vert \,\mathrm{d}t \\ & = \frac{1}{\pi} \int _{\tau}^{\pi} \bigl\vert \psi _{W}(t) \bigr\vert \,\mathrm{d}t \\ & \leq \frac{1}{\pi} \int _{\tau}^{\pi}\exp \biggl(- \frac{n\upsilon _{W}}{4}\min \biggl( 1, \biggl( \frac{nt}{2\pi} \biggr)^{2} \biggr) \biggr) \,\mathrm{d}t \\ & \leq \frac{\pi -\tau}{\pi} \exp \biggl(- \frac{n\upsilon _{W}}{4}\min \biggl( 1, \biggl( \frac{n\tau}{2\pi} \biggr)^{2} \biggr) \biggr) \\ & \leq \exp \biggl(-\frac{n\upsilon _{W}}{4}\min \biggl( 1, \biggl( \frac{n\tau}{2\pi} \biggr)^{2} \biggr) \biggr). \end{aligned}$$
(3.10)

From (3.11) and (3.10),

$$ \begin{aligned} & \biggl\vert P(W_{n} = k) - \frac{1}{\sigma _{W} \sqrt{2\pi}}e^{- \frac{(k-\mu _{W})^{2}}{2\sigma _{W}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020}{ (\sum_{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} )^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}}{\sigma _{W}^{4}} \Biggl(\sum _{j=1}^{n}E \vert Y_{j} \vert ^{2+\alpha} \Biggr)^{ \frac{3-\alpha}{2}} + \frac{0.3184}{\sigma _{W}^{2}\tau}e^{ \frac{-\sigma _{W}^{2}\tau ^{2}}{2}} \\ &\quad \quad {}+ e^{-\frac{n\upsilon _{W}}{4}\min ( 1, ( \frac{n\tau}{2\pi} )^{2} )}. \end{aligned} $$

Furthermore, if \(X_{1},X_{2},\ldots ,X_{n}\) are identically distributed and \(\upsilon _{j} > 0\) for all \(j=1,2,\ldots ,n\), then

$$ \begin{aligned} &\sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \\ &\quad \leq \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}n^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}n^{\frac{1+\alpha}{2}}} \\ &\quad \quad {}+ \frac{0.3184\cdot 3^{\frac{1}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}}{\sigma _{1}^{2}n^{\frac{1+\alpha}{2+\alpha}}} \exp \biggl( \frac{-\sigma _{1}^{2}n^{\frac{\alpha}{2+\alpha}}}{2\cdot 3^{\frac{2}{\alpha}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{2}{2+\alpha}}} \biggr) \\ &\quad \quad {}+ \exp \biggl(-\frac{n\upsilon}{4}\min \biggl( 1, \biggl( \frac{n^{\frac{1+\alpha}{2+\alpha}}d}{2\pi \cdot 3^{\frac{1}{\alpha}}( E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1}{2+\alpha}}} \biggr)^{2} \biggr) \biggr). \end{aligned} $$

From (3.8) and \(n\geq ( \frac{2\pi \cdot 3^{\frac{1}{\alpha}}(E|X_{1}-a|^{2+\alpha})^{\frac{1}{2+\alpha}}}{d} )^{\frac{2+\alpha}{1+\alpha}}\), we obtain that

$$ \begin{aligned} \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} = na + kd) - \frac{d}{\sigma _{1} \sqrt{2n\pi}}e^{- \frac{(na + kd-n\mu _{1})^{2}}{2n\sigma _{1}^{2}}} \biggr\vert \leq \frac{C_{2}}{n^{\frac{1+\alpha}{2}}} + e^{- \frac{n\upsilon}{4}}, \end{aligned} $$

where

$$\begin{aligned} C_{2} & = \frac{0.0020d^{\frac{(2+\alpha )(1+\alpha )}{2}}}{(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{1+\alpha}{2}}} + \frac{4.6171\cdot 3^{\frac{1}{\alpha}}d^{\frac{\alpha ^{2}-\alpha +2}{2}}(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3-\alpha}{2}}}{\sigma _{1}^{4}} \\ & \quad {}+ \frac{0.6368\cdot 3^{\frac{3}{\alpha}}\,d(E \vert X_{1}-a \vert ^{2+\alpha})^{\frac{3}{2+\alpha}}}{\sigma _{1}^{4}}. \end{aligned}$$

 □

4 Examples

In our work, we relax the third moment condition to a moment of order only slightly greater than two. The following example exhibits an integer-valued random variable whose third moment does not exist but whose \((2+\alpha )\)th moment is finite.

Example 4.1

For \(j=1,2,\ldots ,n\), let

$$ P(X_{j} = 0) = P(X_{j} = 2) = 0.45\quad \text{and}\quad P\bigl(X_{j}=2^{k}\bigr) = \frac{5.6}{2^{3k}}\quad \text{for integer }k\geq 2, $$

and assume that \(X_{1},X_{2},\ldots ,X_{n}\) are independent. Note that \(X_{1},X_{2},\ldots ,X_{n}\) are maximal lattice random variables with parameter \((0,2)\) and \(\mu _{j} = 1.3667\), \(\sigma _{j}^{2} = 2.7322\), \(\beta _{j}= 0.2025\),

$$ E \vert X_{j} \vert ^{3} = 3.6 + \sum _{k=2}^{\infty}2^{3k}P\bigl(X_{j}=2^{k} \bigr) = 3.6 + \sum_{k=2}^{\infty}2^{3k} \biggl( \frac{5.6}{2^{3k}} \biggr) = \infty , $$

and for \(\alpha \in (0,1)\),

$$\begin{aligned} E \vert X_{j} \vert ^{2+\alpha} & = 0.45\cdot 2^{2+\alpha} + \sum_{k=2}^{\infty}2^{(2+ \alpha )k}P \bigl(X_{j}=2^{k}\bigr) \\ & = 0.45\cdot 2^{2+\alpha} + \frac{5.6}{2^{2(1-\alpha )}-2^{1-\alpha}} < \infty \end{aligned}$$

for all \(j=1,2,\ldots ,n\). Let

$$ \Delta _{n} = \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} =2k) - \frac{2}{1.6529 \sqrt{2n\pi}}e^{-\frac{(2k -1.3667n)^{2}}{5.4644n}} \biggr\vert . $$

By Theorem 1.1, we have

$$ \Delta _{n} \leq \frac{A_{1}}{n^{\frac{1+\alpha}{2}}} + \frac{A_{2}}{n^{\frac{1+\alpha}{2+\alpha}}}\exp \bigl(-A_{3}n^{ \frac{\alpha}{2+\alpha}} \bigr) + \frac{A_{4}}{n^{\frac{1+\alpha}{2+\alpha}}}\exp \bigl(-A_{5}n^{ \frac{\alpha}{2+\alpha}} \bigr) $$

and Table 1.

Table 1 Explicit constants for Example 4.1

Observe that we cannot apply Theorem 1.2 with Example 4.1 since \(\upsilon _{j} = 0 \) for some \(j \geq 3\).
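
The moment computations of Example 4.1 can be reproduced numerically. The following sketch (our own illustration) truncates the series at an arbitrary level K:

```python
# Illustration only: reproduce the moments of Example 4.1 by truncating the series.
alpha, K = 0.5, 200   # K is an arbitrary truncation level

support = [0.0, 2.0] + [2.0 ** k for k in range(2, K + 1)]
probs = [0.45, 0.45] + [5.6 / 2.0 ** (3 * k) for k in range(2, K + 1)]

mean = sum(p * x for p, x in zip(probs, support))
var = sum(p * x ** 2 for p, x in zip(probs, support)) - mean ** 2
m_2a = sum(p * x ** (2 + alpha) for p, x in zip(probs, support))

print(sum(probs))      # ~1: the probabilities sum to one
print(mean, var)       # ~1.3667 and ~2.7322, as stated above
print(m_2a)            # the (2+alpha)-th moment is finite; the third moment series diverges
```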

Example 4.2

Let \(X_{1},X_{2},\ldots ,X_{n}\) be independent random variables defined by

$$ P(X_{j} = 0) = \frac{7}{8}-\frac{1}{(2j)^{6}-(2j)^{3}}, \qquad P(X_{j} = 2j) = \frac{1}{8}, \quad \text{and}\quad P \bigl(X_{j}=(2j)^{k}\bigr) = (2j)^{-3k} $$

for integer \(k\geq 2\). We see that \(X_{1},X_{2},\ldots ,X_{n}\) are common lattice random variables with parameter \((0,2)\) and

$$\begin{aligned} &\mu _{j} = \frac{j}{4}+\frac{1}{16j^{4}-4j^{2}}, \qquad \sigma _{j}^{2} = \frac{j^{2}}{2}+\frac{1}{4j^{2}-2j} - \mu _{j}^{2}, \\ &E \vert X_{j} \vert ^{2+\alpha} = \frac{(2j)^{2+\alpha}}{8}+ \frac{1}{(2j)^{2-2\alpha}-(2j)^{1-\alpha}} \quad \text{and}\quad E \vert X_{j} \vert ^{3} = \infty . \end{aligned}$$

This implies that

$$\begin{aligned} &\frac{n^{3}}{48} \leq \sigma ^{2} \leq n^{3}, \\ &\frac{n^{3}}{6} \leq \sum_{j=1}^{n}E \vert X_{j} \vert ^{2+\alpha} \leq \biggl( \frac{2^{2+\alpha}}{8} + \frac{2^{1+2\alpha}}{48(2^{1-\alpha}-1)} \biggr)n^{3+\alpha}. \end{aligned}$$

Moreover, we have that

$$ \upsilon =\min_{1\leq j \leq n} \frac{1}{4} \biggl( \frac{7}{8}- \frac{1}{(2j)^{6}-(2j)^{3}} \biggr) = \frac{3}{14}. $$

Let

$$ \Delta _{n} = \sup_{k\in \mathbb{Z}} \biggl\vert P(S_{n} =2k) - \frac{2}{\sigma \sqrt{2\pi}}e^{-\frac{(2k -\mu )^{2}}{2\sigma ^{2}}} \biggr\vert . $$

By Theorem 1.1, we have

$$ \Delta _{n} \leq \frac{B_{1}}{n^{\frac{3+3\alpha}{2}}} + \frac{B_{2}}{n^{\frac{\alpha ^{2}+3}{2}}} + \frac{B_{3}}{n^{3}}\exp \bigl(-B_{4}n^{\frac{\alpha}{2+\alpha}} \bigr) + \exp \bigl(-B_{5}n^{ \frac{\alpha}{2+\alpha}} \bigr) $$

and Table 2.

Table 2 Explicit constants for Example 4.2

Observe that we cannot apply Theorem 1.1 with Example 4.2 since \(\beta _{j} = 0 \) for \(j \geq 2\).
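
Similarly, the closed forms for \(\mu _{j}\) and \(\sigma _{j}^{2}\) in Example 4.2 can be checked against a truncated numerical evaluation of the series (our own illustration; the truncation level K is arbitrary):

```python
# Illustration only: check the closed forms for mu_j and sigma_j^2 in Example 4.2.
def truncated_moments(j, K=40):
    base = 2 * j
    support = [0.0, float(base)] + [float(base) ** k for k in range(2, K + 1)]
    probs = [7 / 8 - 1 / (base ** 6 - base ** 3), 1 / 8] + \
            [float(base) ** (-3 * k) for k in range(2, K + 1)]
    mean = sum(p * x for p, x in zip(probs, support))
    var = sum(p * x ** 2 for p, x in zip(probs, support)) - mean ** 2
    return sum(probs), mean, var

for j in (1, 2, 5):
    total, mean, var = truncated_moments(j)
    mu_j = j / 4 + 1 / (16 * j ** 4 - 4 * j ** 2)
    var_j = j ** 2 / 2 + 1 / (4 * j ** 2 - 2 * j) - mu_j ** 2
    print(j, total, mean - mu_j, var - var_j)   # totals ~1, differences ~0
```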

Availability of data and materials

Not applicable.

References

  1. Siripraparat, T., Neammanee, K.: An improvement of convergence rate in the local limit theorem for integral-valued random variables. J. Inequal. Appl. 2021, 57, 1–18 (2021)


  2. Siripraparat, T., Neammanee, K.: A local limit theorem for Poisson binomial random variables. ScienceAsia 47, 111–116 (2021)


  3. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)


  4. Prokhorov, Y.V., Rozanov, Y.A.: Probability Theory [in Russian]. Nauka, Moscow (1973)


  5. Statulyavichus, V.A.: Limit theorems for densities and asymptotic decompositions for distributions of sums of independent random variables. Theory Probab. Appl. 10(4), 582–595 (1965)


  6. Ushakov, N.: Lower and upper bounds for characteristic functions. J. Math. Sci. 84, 1179–1189 (1997)


  7. Ushakov, N.G.: Selected Topics in Characteristic Functions. VSP, Utrecht (1999)


  8. Benedicks, M.: An estimate of the modulus of the characteristic function of a lattice distribution with application to remainder term estimates in local limit theorems. Ann. Probab. 3, 162–165 (1975)


  9. Zhang, Z.: An upper bound for characteristic functions of lattice distributions with applications to survival probabilities of quantum states. J. Phys. A, Math. Theor. 40(1), 131–137 (2007)


  10. Zhang, Z.: Bound for characteristic functions and Laplace transforms of probability distributions. Theory Probab. Appl. 56(2), 350–358 (2012)


  11. McDonald, D.R.: The local limit theorem: a historical perspective. J. Iran. Stat. Soc. 4(2), 73–86 (2005)


  12. Ibragimov, I.A., Linnik, I.V., Kingman, J.F.C.: Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen (1971)


  13. Petrov, V.V.: Sums of Independent Random Variables. Springer, Berlin (1975)


  14. Korolev, V., Zhukov, Y.V.: Convergence rate estimates in local limit theorems for Poisson random sums. Ann. Inst. Henri Poincaré 99(4), 1439–1444 (2000)


  15. Giuliano, R., Weber, M.J.G.: Approximate local limit theorems with effective rate and application to random walks in random scenery. Bernoulli 23(4B), 3268–3310 (2017)


  16. Zolotukhin, A.Y., Nagaev, S., Chebotarev, V.: On a bound of absolute constant in the Berry-Esseen inequality for i.i.d. Bernoulli random variables. Mod. Stoch. Theory Appl. 5(3), 385–410 (2018)


  17. Sunklodas, J.K.: On the approximation of a binomial random sum. Lith. Math. J. 54(3), 356–365 (2014)


  18. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, New York (1971)


  19. Freiman, G.A., Pitman, J.: Partitions into distinct large parts. J. Aust. Math. Soc. 57(3), 386–416 (1994)



Acknowledgements

The authors would like to thank the reviewers for their valuable comments and suggestions.

Funding

This work was supported by the Development and Promotion of Science and Technology Talents Project (DPST).

Author information


Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kritsana Neammanee.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Kammoo, P., Laipaporn, K. & Neammanee, K. Local limit theorems without assuming finite third moment. J Inequal Appl 2023, 21 (2023). https://doi.org/10.1186/s13660-023-02928-y


