
Explicit constants in the nonuniform local limit theorem for Poisson binomial random variables


In a recent paper the authors proved a nonuniform local limit theorem concerning normal approximation of the point probabilities \(P(S=k)\) when \(S=\sum_{i=1}^{n}X_{i}\) and \(X_{1},X_{2},\ldots ,X_{n}\) are independent Bernoulli random variables that may have different success probabilities. However, their main result contained an undetermined constant, somewhat limiting its applicability. In this paper we give a nonuniform bound in the same setting but with explicit constants. Our proof uses Stein’s method and, in particular, the K-function and concentration inequality approaches. We also prove a new uniform local limit theorem for Poisson binomial random variables that is used to help simplify the proof in the nonuniform case.

1 Introduction

Approximation of complicated distributions by simpler ones, on the basis of asymptotic theory, is a ubiquitous theme in probability and statistics. By far the most commonly used and well-known such result is the central limit theorem (CLT), which ensures the weak convergence of appropriately normalized sums of independent random variables to a standard normal distribution. Statisticians frequently invoke the CLT to construct approximate confidence intervals and hypothesis tests. Due to their widespread use, it is clearly important to understand the quality of commonly applied probability approximations as a function of the sample size.

In order to improve the quality of the normal approximation of an integer-valued random variable, it is standard to apply a continuity correction [1, 2]. Thus, if S is an integer-valued random variable with mean μ and variance \(\sigma ^{2}\), and \(Z_{\mu , \sigma ^{2}} \sim N(\mu ,\sigma ^{2})\), a continuity corrected normal approximation of \(P(a \leq S \leq b)\), \(a, b \in \mathbb{Z}\), is \(P(a-0.5 \leq Z_{\mu , \sigma ^{2}} \leq b+0.5)\).
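As a concrete illustration (the following script is ours, not part of the paper; the Binomial(30, 0.4) example and interval are arbitrary choices), the continuity-corrected approximation can be compared against an exact binomial computation:

```python
import math

def phi_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cc_normal_approx(a, b, mu, sigma):
    """Continuity-corrected normal approximation of P(a <= S <= b)
    for an integer-valued S with mean mu and standard deviation sigma."""
    return phi_cdf((b + 0.5 - mu) / sigma) - phi_cdf((a - 0.5 - mu) / sigma)

# Example: S ~ Binomial(30, 0.4), a special case of the Poisson binomial.
n, p = 30, 0.4
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(10, 16))
approx = cc_normal_approx(10, 15, mu, sigma)
print(round(exact, 4), round(approx, 4))
```

With these parameters the exact probability and its continuity-corrected approximation agree to roughly three decimal places.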

Section 7.1 of [3] studies the accuracy of the normal approximation with continuity correction in the case where \(S=\sum_{i=1}^{n}X_{i}\) and \(X_{1},\ldots ,X_{n}\) are independent Bernoulli random variables with distributions \(P(X_{i}=1)=p_{i}=1-P(X_{i}=0)\), \(p_{i}\in (0,1)\). In this case, S is said to have a Poisson binomial distribution. It is shown, in their Theorem 7.1, that if \(\sigma ^{2} = \text{Var}(S)\) then

$$ d_{TV}\bigl(\mathcal{L}(S), \mathcal{L}(Y)\bigr) := \sup _{A\subset \mathbb{R}} \bigl\vert P(S \in A) - P(Y\in A) \bigr\vert \leq \frac{7.6}{\sigma}, $$

where \(d_{TV}\) is the total variation distance and Y is an integer-valued random variable with distribution

$$ P(Y = k) = P \biggl(\frac{k-\mu -0.5}{\sigma} < Z \leq \frac{k-\mu +0.5}{\sigma} \biggr), \quad k\in \mathbb{Z} $$

and \(Z\sim N(0,1)\). The random variable Y defined by (1.2) is said to have a discretized normal distribution with parameters μ and \(\sigma ^{2}\), written \(Y\sim N^{d}(\mu , \sigma ^{2})\). The proof of (1.1) uses Stein’s method and the zero bias coupling of [4]. [5] also considers discretized normal approximation via Stein’s method, giving bounds in the total variation distance for a wide range of examples including sums of locally dependent integer-valued random variables.
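To make the definition concrete, the \(N^{d}(\mu ,\sigma ^{2})\) point probabilities in (1.2) are easy to compute; the following sketch (our own, with arbitrarily chosen parameters) also confirms that they telescope to total mass 1:

```python
import math

def phi_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def discretized_normal_pmf(k, mu, sigma):
    """P(Y = k) for Y ~ N^d(mu, sigma^2), as in (1.2)."""
    return phi_cdf((k - mu + 0.5) / sigma) - phi_cdf((k - mu - 0.5) / sigma)

mu, sigma = 4.0, 1.5  # arbitrary parameters for illustration
# Consecutive point probabilities telescope, so summing over mu +/- 10 sigma
# captures essentially all of the unit mass.
lo, hi = int(mu - 10 * sigma), int(mu + 10 * sigma)
total = sum(discretized_normal_pmf(k, mu, sigma) for k in range(lo, hi + 1))
print(round(total, 6))  # → 1.0
```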

In addition to considering central limit theorems and bounds in the total variation metric, we may analyze the accuracy of a local normal approximation of the point probabilities \(P(S=k)\), when S is integer-valued, via the quantity

$$ \triangle _{k} = \biggl\vert P(S = k) - \frac{1}{\sigma \sqrt{2\pi}} \operatorname{exp} \biggl\{ -\frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert , \quad k \in \mathbb{Z}. $$
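Since each \(X_{i}\) takes only the values 0 and 1, the exact point probabilities \(P(S=k)\), and hence \(\triangle _{k}\), can be computed by sequential convolution. A small sketch (our own; the \(p_{i}\) below are arbitrary):

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S = X_1 + ... + X_n, convolving one Bernoulli at a time."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += (1 - p) * q   # contribution from X_i = 0
            new[k + 1] += p * q     # contribution from X_i = 1
        pmf = new
    return pmf

ps = [0.3, 0.5, 0.45, 0.6, 0.2, 0.7, 0.55, 0.4]
mu = sum(ps)
var = sum(p * (1 - p) for p in ps)
sigma = math.sqrt(var)
pmf = poisson_binomial_pmf(ps)
delta = [
    abs(pmf[k] - math.exp(-((k - mu) ** 2) / (2 * var)) / (sigma * math.sqrt(2 * math.pi)))
    for k in range(len(pmf))
]
print(round(max(delta), 4))
```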

Proving local limit theorems for a general integer-valued random variable is more delicate than proving central limit theorems as conditions are required to ensure that S does not concentrate on a lattice of span greater than 1. For example, if S is a sum of random variables that are concentrated on the even integers, then \(P(S=k)=0\) for odd k, and a normal approximation for S cannot be expected to be successful uniformly over \(\mathbb{Z}\). Consequently, local limit theorems have been comparatively less studied than central limit theorems although they came first in the historical development of probability [6].

Local limit theorems with uniform error bounds for sums of independent integer-valued random variables are studied in Chap. 7 of [7] via Fourier analysis of characteristic functions. Sufficient conditions are given that ensure \(\sup_{k\in \mathbb{Z}}\triangle _{k} = O(1/\sigma ^{2})\), which is shown to be the optimal order for the error as a function of σ. However, explicit constants are not given in the error bounds, and much of the subsequent literature on local limit theorems presents uniform bounds for \(\triangle _{k}\) using the O symbol without explicit constants. More recently, [8] and [9] give uniform bounds for \(\triangle _{k}\) with explicit constants in the cases where S has a binomial and Poisson binomial distribution respectively.

Theorem 1.1 of [10] gives a nonuniform bound for \(\triangle _{k}\) when S has a Poisson binomial distribution. It was shown that if \(\sigma ^{2} \geq 1\), then for each \(k\in \mathbb{Z}\cap [0,n]\) we have

$$ \triangle _{k} \leq \frac{Ce^{- \vert \frac{k-\mu}{\sigma} \vert }}{\sigma ^{2}} $$

for some positive absolute constant C. The main novelty in this result is the nonuniformity in k, which makes explicit how \(\triangle _{k}\) decays the further k is into the tail of the distribution, an aspect lost in previous studies that only give uniform bounds.

The presence of an undetermined constant in (1.4) somewhat limits the result's applicability. We remedy this here with the following explicit nonuniform bound.

Theorem 1.1

Let \(X_{1}, X_{2},\ldots , X_{n}\) be jointly independent Bernoulli random variables such that \(P(X_{i}=1) = 1-P(X_{i} = 0)=p_{i} \in (0,1)\), and let \(S = \sum_{i=1}^{n}X_{i}\), \(\mu = \mathbb{E} S\), and \(\sigma ^{2} = \textit{Var}(S)\). If \(\sigma ^{2} \geq 5\), then for each \(k\in [0,n]\cap \mathbb{Z}\),

$$ \biggl\vert P(S = k) - \frac{1}{\sigma \sqrt{2\pi}}\operatorname{exp} \biggl\{ - \frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert \leq \frac{e^{- \vert \frac{k-\mu}{\sigma} \vert }}{\sigma ^{2}} \biggl(C_{1} + \frac{C_{2}}{\sigma} + \frac{C_{3}}{\sigma ^{2}} \biggr), $$


where

$$\begin{aligned} &C_{1} = 3.15 + 7.39e^{\frac{1}{\sigma}} + 4.5e^{\frac{7}{3\sigma}}, \\ &C_{2} = 2.58 + 4.87e^{\frac{1}{\sigma}} + 4.58e^{\frac{7}{3\sigma}}, \\ &C_{3} = 0.79e^{\frac{1}{\sigma}} + 0.75e^{\frac{7}{3\sigma}}. \end{aligned}$$

A trivial corollary of Theorem 1.1 is that it gives a value of the constant C appearing in (1.4), albeit under a slightly more restrictive condition on \(\sigma ^{2}\). For example, if \(\sigma ^{2} \geq 5\), 25, or 100, then one may take \(C=38.6\), 22.7, or 18.4, respectively.
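The quoted values follow by plugging the constants of Theorem 1.1 into \(C_{1} + C_{2}/\sigma + C_{3}/\sigma ^{2}\); a short verification script (ours, not from the paper):

```python
import math

def nonuniform_constant(sigma2):
    """Evaluate C1 + C2/sigma + C3/sigma^2 from Theorem 1.1 at a given sigma^2."""
    s = math.sqrt(sigma2)
    e1 = math.exp(1 / s)          # e^{1/sigma}
    e2 = math.exp(7 / (3 * s))    # e^{7/(3 sigma)}
    c1 = 3.15 + 7.39 * e1 + 4.5 * e2
    c2 = 2.58 + 4.87 * e1 + 4.58 * e2
    c3 = 0.79 * e1 + 0.75 * e2
    return c1 + c2 / s + c3 / sigma2

for sigma2, c in [(5, 38.6), (25, 22.7), (100, 18.4)]:
    print(sigma2, round(nonuniform_constant(sigma2), 2), "<=", c)
```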

Our proof of Theorem 1.1 uses Stein’s method, in particular the K-function and concentration inequality approaches, which are both discussed in Sect. 2. In [10], (1.4) was proved using the zero bias coupling [4]. The use of the K-function approach here allows for a more direct determination of constants as we avoid the need to prove an intermediate result concerning normal approximation of the zero biased random variable as in Theorem 3.1 of [10]. While the use of the K-function and concentration inequalities is a standard approach for proving quantitative Berry–Esseen bounds for sums of independent random variables [3, Chap. 3], and even for locally dependent random variables [11], this paper appears to be the first to use this approach to prove a local limit theorem. The zero bias coupling still plays a role when we derive concentration inequalities in Sect. 2.2. Although some previous studies have used Stein’s method to prove local limit theorems in more general settings, they consider only uniform bounds with different approximating distributions such as the translated Poisson [12, 13] or symmetric binomial [14] distributions.

It is easily checked that the normal density function appearing in Theorem 1.1 may be replaced by the discretized normal distribution (1.2) at the cost of different constants, as we make explicit in Lemma 2.2 of Sect. 2.1. However, the formulation in terms of the normal density is in keeping with the classical literature on local limit theorems such as [7].

In our proof of Theorem 1.1, we will also make use of the following uniform local limit theorem, which we prove using the same basic approach as for Theorem 1.1.

Theorem 1.2

Under the same setup as Theorem 1.1 but assuming only \(\sigma ^{2} \geq 1\), we have

$$ \sup_{k\in [0,n]\cap \mathbb{Z}} \biggl\vert P(S = k) - \frac{1}{\sigma \sqrt{2\pi}} \operatorname{exp} \biggl\{ - \frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert \leq \frac{3.23}{\sigma ^{2}} + \frac{1.35}{\sigma ^{3}} + \frac{0.25}{\sigma ^{4}}. $$
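As a numerical sanity check of Theorem 1.2 (our own example; the success probabilities are arbitrary), the left-hand side can be evaluated exactly and compared with the bound:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S = X_1 + ... + X_n by sequential convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += (1 - p) * q   # X_i = 0
            new[k + 1] += p * q     # X_i = 1
        pmf = new
    return pmf

ps = [0.3, 0.5, 0.7] * 10  # n = 30, sigma^2 = 6.7 >= 1
mu = sum(ps)
var = sum(p * (1 - p) for p in ps)
sigma = math.sqrt(var)
pmf = poisson_binomial_pmf(ps)
sup_delta = max(
    abs(pmf[k] - math.exp(-((k - mu) ** 2) / (2 * var)) / (sigma * math.sqrt(2 * math.pi)))
    for k in range(len(pmf))
)
bound = 3.23 / var + 1.35 / sigma**3 + 0.25 / var**2
print(sup_delta <= bound)  # → True
```

In practice the exact error is far smaller than the bound, which reflects worst-case constants rather than typical behavior.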

We will not consider the question of whether a bound of the form (1.4) is optimal. It is conceivable that one could obtain a faster decaying function of k than the exponential decay in our result. Proving optimality of any such bound would likely involve more sophisticated techniques than those used in this paper, and we leave it as an interesting open problem for future research.

The remainder of the paper is structured as follows. Section 2 covers the appropriate background material, in particular the version of Stein's method for local limit theorems developed in [10] that is required for proving our main results, and gives some useful auxiliary lemmas. Section 3 gives the proof of our main result, Theorem 1.1, as well as that of Theorem 1.2, while proofs of some of the auxiliary results are given in Sect. 4.

2 Background and auxiliary results

In this section we cover the necessary prerequisites and give some auxiliary results required to prove Theorem 1.1. Section 2.1 introduces Stein’s method for normal approximation and the setup of [10] required for local limit theorems. Section 2.2 introduces the zero bias coupling, which is used to derive various concentration inequalities. Section 2.3 considers properties of the solution of the Stein equation and its derivative while, finally, Sect. 2.4 introduces the K-function, which is our main technique for manipulating the Stein equation and proving Theorem 1.1.

2.1 Stein’s method for local limit theorems

Let \(\mathcal{F}\) be the set of absolutely continuous functions \(f:\mathbb{R}\to \mathbb{R}\) such that \(f'\) exists almost everywhere and \(\mathbb{E}|f'(Z)| < \infty \) where, here and for the remainder of the paper, \(Z\sim N(0,1)\). Stein’s method for normal approximation revolves around the following characterization of the normal distribution. A random variable W has a standard normal distribution if and only if

$$ \mathbb{E}\bigl\{ f'(W) - Wf(W)\bigr\} = 0, \quad f\in \mathcal{F}. $$

For a proof of this characterization, see Lemma 1 of [15] or Lemma 2.1 of [3].

Now let \(f:=f_{h}\) be the bounded solution of the ordinary differential equation

$$ f'(w) - wf(w) = h(w) - \mathbb{E} h(Z) $$

with \(h\in \mathcal{H}\), where \(\mathcal{H}\) is a class of test functions that will be chosen depending on the problem at hand. For example, suppose that we wish to bound the Kolmogorov distance

$$ d_{K}(W,Z) = \sup_{h\in \mathcal{H}_{K}} \bigl\vert \mathbb{E} h(W) - \mathbb{E} h(Z) \bigr\vert $$

between the random variable W, not necessarily normally distributed, and Z. The class of test functions in this case is the set of half-line indicators

$$ \mathcal{H}_{K} = \bigl\{ \mathbf{1}_{(-\infty ,x]} : x\in \mathbb{R} \bigr\} . $$
The Kolmogorov metric gives a uniform bound on the absolute differences of the distribution functions of W and Z and is the appropriate metric to consider in order to prove the Berry–Esseen theorem [3, Chap. 3]. Replacing w by W in (2.2) and taking expectations, we see that a bound on the Kolmogorov metric may be obtained from

$$ d_{K}(W, Z) \leq \sup_{h\in \mathcal{H}_{K}} \bigl\vert \mathbb{E}\bigl\{ f'(W) - Wf(W)\bigr\} \bigr\vert . $$

Boundedness properties of f and \(f'\) together with various coupling techniques that have been developed [3, Chap. 2] mean that it is often more straightforward to obtain a bound from (2.5) than to work directly with (2.3).

In order to utilize the Stein framework for our problem, we let W be a normalized version of S with mean 0 and unit variance. In particular, we let \(\xi _{i} = (X_{i}-p_{i})/\sigma \) and \(W=\sum_{i=1}^{n}\xi _{i}\) so that \(\mathbb{E}W=\mathbb{E} \xi _{i} = 0\), \(\text{Var} W = 1\), \(\text{Var} \xi _{i} = \sigma _{i}^{2}/\sigma ^{2}\) and W takes values in the set \(\mathcal{A}_{n} = \{(k-\mu )/\sigma : k\in \mathbb{Z} \cap [0,n] \}\). The set of test functions we consider is

$$ \mathcal{H} = \bigl\{ h_{x} = \mathbf{1}_{(x-1/\sigma ,x]} : x\in \mathcal{A}_{n} \bigr\} . $$
If \(h=h_{x}\in \mathcal{H}\) with \(x=(k-\mu )/\sigma \), \(k\in \mathbb{Z} \cap [0,n]\), then we have \(\mathbb{E} h(W) = P(W=x) = P(S=k)\), and our problem is to bound

$$ \biggl\vert P(S = k) - \frac{1}{\sigma \sqrt{2\pi}}\operatorname{exp} \biggl\{ - \frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert = \biggl\vert P(W = x) - \frac{1}{\sigma \sqrt{2\pi}}e^{-x^{2}/2} \biggr\vert . $$

The next result quantifies \(\mathbb{E} h(Z)\) and verifies that \(\mathcal{H}\) defined in (2.6) is indeed an appropriate set of test functions for proving Theorem 1.1.

Lemma 2.1

Let \(x\in \mathcal{A}_{n}\) and \(h := h_{x} = \mathbf{1}_{(x-1/\sigma ,x]}\). If \(\sigma ^{2} \geq 1\), then \(\mathbb{E} h(Z) = (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2} + R\), where

$$ (a)\quad \vert R \vert \leq \frac{1}{\sigma ^{2}\sqrt{2e\pi}} \quad \textit{and} \quad (b)\quad \vert R \vert \leq \frac{0.88e^{1/\sigma}e^{- \vert x \vert }}{\sigma ^{2}}. $$


Proof

Part (a) is just a restatement of Lemma 4.1 (a) in [10]. We note that (a) implies (b) if \(|x| \leq 1\) since in this case

$$ \vert R \vert \leq \frac{1}{\sigma ^{2}\sqrt{2e\pi}} = \frac{e^{ \vert x \vert }e^{- \vert x \vert }}{\sigma ^{2}\sqrt{2e\pi}} \leq \frac{e^{- \vert x \vert }}{\sigma ^{2}}\sqrt{\frac{e}{2\pi}} < \frac{0.66e^{- \vert x \vert }}{\sigma ^{2}}. $$

Thus to prove (b), we may assume that \(|x| > 1\).

By the mean value theorem for integrals, we have that \(\mathbb{E} h(Z) = \sigma ^{-1}\phi (c)\) for some \(c\in (x-1/\sigma , x)\), where \(\phi (c) = (\sqrt{2\pi})^{-1}e^{-c^{2}/2}\). Since \(|c-x| < 1/\sigma \), by the mean value theorem, \(|\phi (c)-\phi (x)| < \sigma ^{-1}|\phi '(d)|\) for some d between c and x, and thus \(d\in (x-1/\sigma , x)\). Now write \(\phi (c) =\phi (x) + R_{1}\) with \(|R_{1}| < \sigma ^{-1}|\phi '(d)|\), \(d\in (x-1/\sigma , x)\), and since \(\mathbb{E} h(Z) = \sigma ^{-1}\phi (c)\), we have

$$ \mathbb{E} h(Z) = \frac{e^{-x^{2}/2}}{\sigma \sqrt{2\pi}} + R, $$

where \(R = R_{1}/\sigma \). Now \(|\phi '(d)| \leq 2.2(\sqrt{2\pi})^{-1}e^{-|d|}\), since \(|de^{-d^{2}/2}| \leq 2.2e^{-|d|}\) for all d. As \(d\in (x-1/\sigma , x)\), we may write \(d = x + \delta \) with \(\delta \in (-1/\sigma ,0)\). Now we consider the cases \(x > 1\) and \(x< -1\).

If \(x>1\), then \(x-1/\sigma > 0\), and so \(d>0\) and \(|d|=d\). In this case, \(|\phi '(d)| \leq 2.2(\sqrt{2\pi})^{-1}e^{-|d|} = 2.2(\sqrt{2\pi})^{-1}e^{-d} \leq 0.88e^{1/\sigma}e^{-x} \).

If \(x < -1\), then \(|d|=-d\), and so \(|\phi '(d)| \leq 2.2(\sqrt{2\pi})^{-1}e^{-|d|} \leq 0.88 e^{\delta}e^{x} \leq 0.88e^{-|x|}\). As \(R = R_{1}/\sigma \) and \(|R_{1}| < \sigma ^{-1}|\phi '(d)|\), this completes the proof. □
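The two bounds of Lemma 2.1 can also be checked numerically, using \(\mathbb{E}h(Z) = \Phi (x) - \Phi (x-1/\sigma )\) (our own verification; the grid and the choice \(\sigma = 1.5\) are arbitrary):

```python
import math

def phi_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def remainder(x, sigma):
    """R = E h(Z) - (sigma sqrt(2 pi))^{-1} exp(-x^2/2) for h = 1_{(x-1/sigma, x]}."""
    ehz = phi_cdf(x) - phi_cdf(x - 1 / sigma)
    return ehz - math.exp(-x * x / 2) / (sigma * math.sqrt(2 * math.pi))

sigma = 1.5  # any value with sigma^2 >= 1
ok = all(
    abs(remainder(ix / 10, sigma))
    <= min(1 / (sigma**2 * math.sqrt(2 * math.e * math.pi)),          # part (a)
           0.88 * math.exp(1 / sigma) * math.exp(-abs(ix / 10)) / sigma**2)  # part (b)
    for ix in range(-40, 41)
)
print(ok)  # → True
```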

As a consequence of Lemma 2.1, we have that for each \(x\in \mathcal{A}_{n}\)

$$\begin{aligned} & \biggl\vert P(W = x) - \frac{1}{\sigma \sqrt{2\pi}}e^{-x^{2}/2} \biggr\vert \leq \bigl\vert \mathbb{E}\bigl\{ f'(W) - Wf(W)\bigr\} \bigr\vert + \frac{1}{\sigma ^{2}\sqrt{2e\pi}} \end{aligned}$$

and
$$\begin{aligned} & \biggl\vert P(W = x) - \frac{1}{\sigma \sqrt{2\pi}}e^{-x^{2}/2} \biggr\vert \leq \bigl\vert \mathbb{E}\bigl\{ f'(W) - Wf(W) \bigr\} \bigr\vert + \frac{0.88e^{1/\sigma}e^{- \vert x \vert }}{\sigma ^{2}}, \end{aligned}$$

where \(f := f_{x}\) is the bounded solution of

$$ f'(w) - wf(w) = h_{x}(w) - Nh $$

with \(h_{x} = \mathbf{1}_{(x-1/\sigma ,x]}\) and \(Nh = \mathbb{E}h_{x}(Z)\).

Our problem then is reduced to bounding \(|\mathbb{E}\{f'(W) - Wf(W)\}|\) with f the bounded solution of (2.10). Our approach to bounding this quantity is discussed in Sect. 2.4. Before we can deal with this problem, we need to acquire some further auxiliary results in Sects. 2.2 and 2.3.

We end this section by making explicit the fact that Theorems 1.1 and 1.2 imply analogous results with the discretized normal distribution replacing the normal density.

Lemma 2.2

Let Y have a discretized normal distribution with parameters μ and \(\sigma ^{2}\) as defined by (1.2). Then

$$\begin{aligned} &(a)\quad \biggl\vert P(Y=k) - \frac{1}{\sigma \sqrt{2\pi}} \operatorname{exp} \biggl\{ -\frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert \leq \frac{1}{\sigma ^{2}\sqrt{2e\pi}}. \end{aligned}$$

If \(\sigma ^{2} \geq 5\), then

$$\begin{aligned} &(b)\quad \biggl\vert P(Y=k) - \frac{1}{\sigma \sqrt{2\pi}} \operatorname{exp} \biggl\{ - \frac{(k-\mu )^{2}}{2\sigma ^{2}} \biggr\} \biggr\vert \leq \frac{1.11e^{- \vert \frac{k-\mu}{\sigma} \vert }}{\sigma ^{2}}. \end{aligned}$$


Proof

The proof follows in essentially the same way as that of Lemma 2.1, and we omit the details. □

2.2 Concentration inequalities via the zero bias coupling

In this section we derive some concentration inequalities for \(P(a \leq W \leq b)\) and give bounds for the point probabilities \(P(W=x)\), \(x\in \mathcal{A}_{n}\). We recall that if Y is a zero mean random variable with \(\text{Var}(Y) = \sigma ^{2}_{Y}\), then the random variable \(Y^{*}\) is said to have the Y-zero biased distribution if

$$ \sigma ^{2}_{Y}\mathbb{E}f' \bigl(Y^{*}\bigr) = \mathbb{E} Yf(Y) $$

for all absolutely continuous functions f such that the above expectations exist. The notion of zero biasing was introduced in [4], where the existence of \(Y^{*}\) was established for any mean zero random variable Y with finite variance. For further applications of the zero bias coupling beyond Berry–Esseen bounds in the classical central limit theorem, see [16] and [17]. Throughout the remainder of this section, a superscript asterisk * on a random variable denotes a random variable with the corresponding zero biased distribution.

In our setting, from Lemma 2.1 of [4], \(W^{*}\) may be constructed on the same space as W by setting \(W^{*}= W - \xi _{I} + \xi _{I}^{*}\), where I is a random index with distribution \(P(I=i)=\sigma _{i}^{2}/\sigma ^{2}\), \(1\leq i \leq n\). It may be shown [3, p. 29] that \(\xi _{i}^{*}\) is uniformly distributed on \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\), and thus, as \(\xi _{i}\) and \(\xi _{i}^{*}\) have the same support and \(W-W^{*} = \xi _{I}-\xi _{I}^{*}\), we have that

$$ \bigl\vert W-W^{*} \bigr\vert \leq \frac{1}{\sigma}. $$
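The construction just described is easy to simulate; the following sketch (our own code, with arbitrary \(p_{i}\)) samples \((W, W^{*})\) from the coupling and checks the bound \(|W-W^{*}| \leq 1/\sigma \) on every draw:

```python
import math
import random

random.seed(1)
ps = [0.2, 0.6, 0.4, 0.7, 0.5, 0.3, 0.55, 0.45]
var = sum(p * (1 - p) for p in ps)   # sigma^2 = Var(S)
sigma = math.sqrt(var)

def sample_coupled_pair():
    """One draw of (W, W*) from the zero bias coupling described above."""
    xis = [((1.0 if random.random() < p else 0.0) - p) / sigma for p in ps]
    w = sum(xis)
    # random index I with P(I = i) = sigma_i^2 / sigma^2
    i = random.choices(range(len(ps)), weights=[p * (1 - p) for p in ps])[0]
    # xi_i* is uniform on [-p_i/sigma, (1 - p_i)/sigma]
    xi_star = random.uniform(-ps[i] / sigma, (1 - ps[i]) / sigma)
    return w, w - xis[i] + xi_star

max_gap = max(abs(w - w_star)
              for w, w_star in (sample_coupled_pair() for _ in range(10000)))
print(max_gap <= 1 / sigma + 1e-9)  # → True
```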

The authors of [10] use (2.12) to prove a nonuniform bound on the local normal approximation of \(W^{*}\) that forms one of the key steps in the proof of their Theorem 1.1. Here, we will prove concentration inequalities for \(W^{*}\) and use these together with (2.12) to obtain concentration inequalities for W.

Choosing the function f in (2.11) such that \(f'(w) = \mathbf{1}_{[a,b]}(w)\) and \(f(\frac{a+b}{2}) = 0\), for \(a \leq b\), it is shown in Lemma 3.2 of [17] that for any random variable Y with \(\mathbb{E}(Y) = 0\) and \(\text{Var}(Y) = \sigma ^{2}_{Y}\),

$$ P\bigl(a \leq Y^{*} \leq b\bigr) \leq \frac{b-a}{2\sigma _{Y}}. $$

We now use this to obtain uniform concentration inequalities for W and \(W^{(i)} = W - \xi _{i}\).

Lemma 2.3

For all \(a \leq b\), we have

$$\begin{aligned} & P(a \leq W \leq b) \leq \frac{b-a}{2}+ \frac{1}{\sigma}. \end{aligned}$$

Moreover, if \(\sigma ^{2} \geq 5\), we have

$$\begin{aligned} & P\bigl(a\leq W^{(i)} \leq b\bigr) \leq \frac{b-a}{1.94}+ \frac{1.03}{\sigma}, \end{aligned}$$

where \(W^{(i)} = W - \xi _{i}\).


Proof

From (2.12), (2.13) and the fact that \(\sigma ^{2}_{W} = \text{Var}(W) =1\), we have

$$ P(a \leq W \leq b) \leq P \biggl(a-\frac{1}{\sigma} \leq W^{*} \leq b+ \frac{1}{\sigma} \biggr) \leq \frac{b-a}{2}+ \frac{1}{\sigma}, $$

which is (2.14). For (2.15) we have that

$$ \sigma _{W^{(i)}} = \sqrt{1- \frac{\sigma _{i}^{2}}{\sigma ^{2}}} = \sqrt{1 - \frac{p_{i}(1-p_{i})}{\sigma ^{2}}} \geq \sqrt{1 - \frac{1}{4\sigma ^{2}}} \geq \sqrt{ \frac{19}{20}} \geq 0.974, $$

and so using this in (2.13) with \(|W^{(i)} - W^{(i)*}| \leq 1/\sigma \) gives

$$ P\bigl( a\leq W^{(i)} \leq b\bigr) \leq P \biggl(a - \frac{1}{\sigma} \leq W^{(i)*} \leq b + \frac{1}{\sigma} \biggr) \leq \frac{b-a}{1.94}+ \frac{1.03}{\sigma}, $$

as required. □

Lemma 2.3 may be used to uniformly bound \(P(W=x)\), e.g., by writing \(P(W=x)=P(x-\epsilon /2 \leq W \leq x+\epsilon /2)\) for small positive ϵ and letting \(\epsilon \to 0^{+}\). However, this approach gives a worse constant than that of [18], which we state in Lemma 2.4 below together with an analogous result for \(P(W^{(i)}=x)\). As before, \(\mathcal{A}_{n}\) denotes the support of W, and we will denote the support of \(W^{(i)}\) by \(\mathcal{A}_{n}^{(i)}\) so that \(\mathcal{A}_{n}^{(i)} = \{(k-\mu ^{(i)})/\sigma : k \in [0, n-1] \cap \mathbb{Z}\}\), where \(\mu ^{(i)}= \mu - p_{i}\).

Lemma 2.4

The following uniform bound holds:

$$\begin{aligned} & \sup_{k\in [0,n]\cap \mathbb{Z}}P(S=k) = \sup_{x\in \mathcal{A}_{n}}P(W=x) \leq \frac{0.5}{\sigma}. \end{aligned}$$

Moreover if \(\sigma ^{2} \geq 1\) then

$$\begin{aligned} & \sup_{k\in [0,n-1] \cap \mathbb{Z}}P\bigl(S^{(i)}=k \bigr) = \sup_{x\in \mathcal{A}_{n}^{(i)}}P\bigl(W^{(i)}=x\bigr) \leq \frac{0.58}{\sigma}, \end{aligned}$$

where \(S^{(i)} = S - X_{i}\).


Proof

The bound (2.16) is given in Lemma 1 of [18]. Now, as \(S^{(i)}\) is also a Poisson binomial random variable, we have from (2.16) and the fact that \(\sigma \geq 1\) that

$$ P\bigl(S^{(i)}=k\bigr) \leq \frac{0.5}{\sqrt{\sigma ^{2} - p_{i}(1-p_{i})}} \leq \frac{0.5}{\sqrt{\sigma ^{2} - 1/4}} \leq \frac{1}{\sigma \sqrt{3}} $$

for each \(k\in [0,n-1]\cap \mathbb{Z}\), which implies (2.17). □
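Both bounds are straightforward to confirm on examples (our own check; the \(p_{i}\) are arbitrary and here \(\sigma ^{2} \approx 2.1 \geq 1\)):

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S by sequential convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += (1 - p) * q
            new[k + 1] += p * q
        pmf = new
    return pmf

ps = [0.15, 0.85, 0.5, 0.4, 0.6, 0.25, 0.75, 0.5, 0.45, 0.55]
sigma = math.sqrt(sum(p * (1 - p) for p in ps))
pmf = poisson_binomial_pmf(ps)          # pmf of S
pmf_i = poisson_binomial_pmf(ps[1:])    # pmf of S^(i), dropping X_1
print(max(pmf) <= 0.5 / sigma, max(pmf_i) <= 0.58 / sigma)  # → True True
```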

Lemma 2.5 and Corollary 2.1 below are nonuniform versions of Lemmas 2.3 and 2.4. Before stating these results, we recall from Lemma 3.3 of [10] that for each \(m\in \mathbb{N}\) we have the bound \(\mathbb{E}|W|^{2m} \leq p(2m)\), uniformly in n, where \(p(2m)\) is the number of partitions of 2m, i.e., the number of ways that 2m may be written as a sum of positive integers irrespective of order. Since \(p(2m)^{1/2m}\to 1\) as \(m\to \infty \) [19, Sect. 6.4], it follows that, uniformly in n,

$$ \limsup_{m\to \infty} \bigl(\mathbb{E} \vert W \vert ^{2m} \bigr)^{1/2m} \leq 1. $$

The same bound holds when W is replaced by \(W^{(i)} = W - \xi _{i}\).

We also make use of the fact that if \(\sigma ^{2} \geq A^{2}\) then \(\xi _{i} \leq 1/A\) for each \(1\leq i \leq n\), and so by Lemma 8.1 in [3] with \(\alpha = 1/A\) and \(B^{2}=1\), we have that for each \(t > 0\)

$$ \mathbb{E}e^{tW} \leq \text{exp}\bigl\{ A^{2} \bigl(e^{t/A} - 1 - t/A\bigr)\bigr\} . $$

In particular, letting \(t = 2m/(2m-1)\) for \(m\in \mathbb{N}\), we find that, uniformly in n,

$$ \limsup_{m\to \infty} \bigl(\mathbb{E}e^{\frac{2m}{2m-1}W} \bigr)^{\frac{2m-1}{2m}} \leq \text{exp}\bigl\{ A^{2}\bigl(e^{1/A}-1-1/A\bigr) \bigr\} , $$

with the same bound holding when W is replaced by \(W^{(i)}\).
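Since W is a sum of independent terms, \(\mathbb{E}e^{tW}\) factorizes, so the moment generating function bound above can be checked exactly (our own script; we take \(A=\sigma \), for which \(\xi _{i} \leq 1/A\) holds automatically, and the \(p_{i}\) are arbitrary):

```python
import math

ps = [0.3, 0.6, 0.45, 0.7, 0.5, 0.2, 0.55, 0.65, 0.4, 0.35]
sigma = math.sqrt(sum(p * (1 - p) for p in ps))
A = sigma  # xi_i <= (1 - p_i)/sigma <= 1/A with this choice

def mgf_exact(t):
    """E exp(tW) computed as a product over the independent xi_i."""
    out = 1.0
    for p in ps:
        out *= p * math.exp(t * (1 - p) / sigma) + (1 - p) * math.exp(-t * p / sigma)
    return out

def mgf_bound(t):
    """Right-hand side of the Bennett-type bound with A = sigma."""
    return math.exp(A**2 * (math.exp(t / A) - 1 - t / A))

for t in [0.5, 1.0, 2.0, 5.0]:
    print(t, mgf_exact(t) <= mgf_bound(t))
```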

Lemma 2.5

If \(\sigma ^{2} \geq 5\) then for \(0 \leq a < b\), we have

$$\begin{aligned} & P(a \leq W \leq b) \leq 1.8\bigl(1+e^{1/\sigma}\bigr)e^{-a} \biggl(b-a + \frac{1}{\sigma} \biggr) \end{aligned}$$

and
$$\begin{aligned} & P\bigl(a \leq W^{(i)} \leq b\bigr) \leq 1.9 \bigl(1+e^{1/\sigma}\bigr)e^{-a} \biggl(b-a + \frac{1}{\sigma} \biggr). \end{aligned}$$


Proof

Define the function \(g:\mathbb{R}\to \mathbb{R}\) by

$$ g(w) = \textstyle\begin{cases} 0, & w < a, \\ e^{w}(w-a), & a \leq w \leq b, \\ e^{w}(b-a), & w > b, \end{cases} $$

for which \(g'(w) \geq 0\) for \(w\in \mathbb{R}\) and \(g'(w) \geq e^{a}\) for \(w \in [a, b]\). We also have \(0 \leq g(w) \leq (b-a)e^{w}\) for \(w\in \mathbb{R}\). It follows that

$$ e^{a}P\bigl(a \leq W^{*} \leq b\bigr) \leq \mathbb{E}g'\bigl(W^{*}\bigr). $$

By Hölder's inequality, for all \(m\in \mathbb{N}\),

$$ \mathbb{E} Wg(W) \leq (b-a)\mathbb{E} \vert W \vert e^{W} \leq (b-a) \bigl(\mathbb{E} \vert W \vert ^{2m} \bigr)^{\frac{1}{2m}} \bigl(\mathbb{E}e^{\frac{2m}{2m-1}W} \bigr)^{\frac{2m-1}{2m}}. $$
Letting \(m\to \infty \) and applying (2.19) and (2.21) with \(A=\sqrt{5}\), we get

$$ \mathbb{E} Wg(W) \leq (b-a)\operatorname{exp}\bigl\{ 5\bigl(e^{1/\sqrt{5}}-1-1/\sqrt{5}\bigr)\bigr\} , $$
and hence

$$ 0 \leq \mathbb{E} Wg(W) \leq 1.8(b-a). $$

Now, as \(\mathbb{E}g'(W^{*}) = \mathbb{E}Wg(W)\), we get from (2.24) and (2.25)

$$ P\bigl(a \leq W^{*} \leq b\bigr) \leq 1.8e^{-a}(b-a), \quad 0\leq a < b. $$

Since \(|W-W^{*}| \leq 1/\sigma \), we have, assuming \(a \geq 1/\sigma \), that

$$\begin{aligned} P(a \leq W \leq b) &= P\bigl(a \leq W \leq b, W > W^{*}\bigr) + P \bigl(a \leq W \leq b, W \leq W^{*}\bigr) \\ & \leq P \biggl(a-\frac{1}{\sigma} \leq W^{*} \leq b \biggr) + P \biggl(a \leq W^{*} \leq b + \frac{1}{\sigma} \biggr) \\ & \leq 1.8e^{1/\sigma}e^{-a} \biggl(b-a + \frac{1}{\sigma} \biggr) + 1.8e^{-a} \biggl(b-a + \frac{1}{\sigma} \biggr) \\ &= 1.8\bigl(1+e^{1/\sigma}\bigr)e^{-a} \biggl(b-a + \frac{1}{\sigma} \biggr). \end{aligned}$$

The assumption \(a \geq 1/\sigma \) was required in (2.27) to ensure that (2.26) could be applied. If \(0 \leq a < 1/\sigma \), then \(a\in [0,1/\sqrt{5})\), and the result follows from Lemma 2.3 since

$$ P(a\leq W \leq b) \leq \frac{b-a}{2} + \frac{1}{\sigma} = e^{a}e^{-a} \biggl(\frac{b-a}{2} + \frac{1}{\sigma} \biggr) \leq 1.6e^{-a} \biggl( \frac{b-a}{2} + \frac{1}{\sigma} \biggr). $$

The proof of (2.23) follows in the same way except that, as \(\text{Var}(W^{(i)}) \neq 1\), prior to (2.26) we must use that \(\sigma _{W^{(i)}}^{2}\mathbb{E}g'(W^{(i)*}) = \mathbb{E}W^{(i)}g(W^{(i)})\) and the fact that \(\sigma _{W^{(i)}}^{2} \geq (\sqrt{19/20})^{2} = 0.95\), as shown in the proof of Lemma 2.3. □

Remark 1

It is clear from the proof of Lemma 2.5 that (2.22) holds more generally whenever \(W=\sum_{i=1}^{n}\xi _{i}\), \(\text{Var}(W)=1\) with \(\mathbb{E} \xi _{i}=0\) and \(\xi _{i} \leq 1/\sqrt{5}\). Thus, if a and b are both negative, we have

$$ \begin{aligned} &P(a\leq W \leq b) = P\bigl( \vert b \vert \leq -W \leq \vert a \vert \bigr) \leq 1.8\bigl(1+e^{1/\sigma}\bigr)e^{- \vert b \vert } \biggl(b-a + \frac{1}{\sigma} \biggr), \\ &\quad a < b \leq 0. \end{aligned} $$

Arguing as in the paragraph prior to Lemma 2.4, we obtain from Lemma 2.5 the following nonuniform bound on the point probabilities \(P(W=x)\), \(x\in \mathcal{A}_{n}\) and \(P(W^{(i)}=x)\), \(x\in \mathcal{A}_{n}^{(i)}\).

Corollary 2.1

For \(x\in \mathcal{A}_{n}\) with \(x=(k-\mu )/\sigma \), \(k\in [0,n]\cap \mathbb{Z}\), we have

$$\begin{aligned} & P(W = x) = P(S=k) \leq 1.8\bigl(1+e^{1/\sigma}\bigr) \frac{e^{-|x|}}{\sigma}. \end{aligned}$$

Similarly, for \(x\in \mathcal{A}_{n}^{(i)}\), we have

$$\begin{aligned} & P\bigl(W^{(i)} = x\bigr) \leq 1.9 \bigl(1+e^{1/\sigma}\bigr)\frac{e^{-|x|}}{\sigma}. \end{aligned}$$

2.3 The Stein equation

In this section we consider the properties of the function f, which is the bounded solution to the Stein equation (2.10). For the remainder of the paper, unless otherwise stated, it may be assumed that \(x\in \mathcal{A}_{n}\) where \(\mathcal{A}_{n}\) is as defined in Sect. 2.1. We first recall the following basic properties of f from Lemma 3.2 in [10].

Lemma 2.6

Let \(f:=f_{x}\) be the bounded solution of (2.10). Then

  1. (a)

    \(0\leq f'(w) \leq 1\), \(w\in (x-1/\sigma , x]\),

  2. (b)

    f is continuous, increasing on the interval \(w\in (x-1/\sigma , x]\) and decreasing otherwise,

  3. (c)

    if \(\sigma ^{2} \geq 1\), we have

    $$ \bigl\vert f(w) \bigr\vert \leq \frac{1}{\sigma}, \quad w\in \mathbb{R}. $$

It was also shown in [10] that the term Nh appearing in (2.10) is bounded above by \(C\sigma ^{-1}e^{-|x|}\) for some absolute positive constant C. We now quantify the value of C.

Lemma 2.7

Let \(Nh = P(x-1/\sigma < Z \leq x)\), where \(Z\sim N(0,1)\). Then,

  1. (a)

    \(Nh \leq \frac{1.03e^{-|x|}}{\sigma}\) if \(\sigma ^{2} \geq 5\),

  2. (b)

    \(Nh \leq \frac{0.4e^{-x^{2}/2}}{\sigma}\) for all \(\sigma > 0\) when \(x \leq 0\).


Proof

For (a), we divide the proof into three cases according to whether \(x > 1/\sqrt{5}\), \(|x| \leq 1/\sqrt{5}\), or \(x < -1/\sqrt{5}\).

Case 1: \(x > 1/\sqrt{5}\). In this case, since \(\sigma \geq \sqrt{5}\), we have \(x-1/\sigma > 0\) and \(Nh \leq (\sigma \sqrt{2\pi})^{-1} e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2} }\). Since \(e^{-t^{2}/2} \leq e^{1/2}e^{-t}\), when \(t>0\), we have \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{1/2}e^{-(x-1/\sigma )} = e^{ \frac{1}{2}+\frac{1}{\sigma}} (\sqrt{2\pi})^{-1}\frac{e^{-x}}{\sigma}\). Since \((\sqrt{2\pi})^{-1}e^{1/2 + 1/\sqrt{5}} < 1.03\), (a) holds when \(x > 1/\sqrt{5}\).

Case 2: \(|x| \leq 1/\sqrt{5}\). Since \(Nh \leq (\sigma \sqrt{2\pi})^{-1}\), we have for all \(|x|\leq 1/\sqrt{5}\) that

$$ Nh \leq \frac{1}{\sigma \sqrt{2\pi}}e^{|x|}e^{-|x|} \leq \frac{0.63e^{-|x|}}{\sigma}, $$

which holds for all \(\sigma > 0\).

Case 3: \(x < -1/\sqrt{5}\). In this case we have

$$\begin{aligned} Nh &= P(x-1/\sigma < Z \leq x) = P\bigl( \vert x \vert \leq -Z < \vert x \vert +1/\sigma \bigr) = P\bigl( \vert x \vert < Z \leq \vert x \vert +1/\sigma \bigr) \\ &\leq \frac{e^{-x^{2}/2}}{\sigma \sqrt{2\pi}} \leq \frac{e^{1/2}e^{- \vert x \vert }}{\sigma \sqrt{2\pi}} \leq \frac{0.66e^{- \vert x \vert }}{\sigma} \end{aligned}$$

valid for all \(\sigma > 0\), where we used the fact that \(e^{-x^{2}/2} \leq e^{1/2}e^{-|x|}\). This completes the proof of (a).

For (b), noting that the working in Case 3 above holds whenever \(x \leq 0\), we have \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2} \leq 0.4\sigma ^{-1}e^{-x^{2}/2}\). □
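Both parts can be confirmed numerically with \(Nh = \Phi (x) - \Phi (x - 1/\sigma )\) (our own check at \(\sigma ^{2} = 5\); the grids are arbitrary):

```python
import math

def phi_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma = math.sqrt(5)
# part (a): Nh <= 1.03 e^{-|x|}/sigma for sigma^2 >= 5
ok_a = all(
    phi_cdf(x) - phi_cdf(x - 1 / sigma) <= 1.03 * math.exp(-abs(x)) / sigma
    for x in [ix / 10 for ix in range(-80, 81)]
)
# part (b): Nh <= 0.4 e^{-x^2/2}/sigma for x <= 0
ok_b = all(
    phi_cdf(x) - phi_cdf(x - 1 / sigma) <= 0.4 * math.exp(-x * x / 2) / sigma
    for x in [-ix / 10 for ix in range(0, 81)]
)
print(ok_a, ok_b)  # → True True
```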

We recall, from equation (3.6) of [10], that the unique bounded solution f, of (2.10), may be written as

$$ f(w) = \textstyle\begin{cases} (\sqrt{2\pi})Nh e^{w^{2}/2} [1 - \Phi (w) ], & w > x, \\ (\sqrt{2\pi})e^{w^{2}/2} [\Phi (w)(1 - \Phi (x)) - \Phi (x- \frac{1}{\sigma})(1-\Phi (w)) ], & w \in (x - \frac{1}{\sigma}, x], \\ -(\sqrt{2\pi})Nhe^{w^{2}/2}\Phi (w), & w \leq x - \frac{1}{\sigma}, \end{cases} $$

and so

$$\begin{aligned} f'(w) & = (\sqrt{2\pi})Nh we^{w^{2}/2} \bigl[1 - \Phi (w) \bigr] - Nh, \quad w > x, \end{aligned}$$

and
$$\begin{aligned} f'(w) & = -(\sqrt{2\pi})Nhwe^{w^{2}/2} \Phi (w) - Nh, \quad w \leq x - \frac{1}{\sigma}. \end{aligned}$$

We will not need to make use of the explicit expression for \(f'(w)\) for \(w \in (x - \frac{1}{\sigma}, x]\); it will suffice to know that \(0 \leq f'(w) \leq 1\) in this case. For \(w\notin (x - \frac{1}{\sigma}, x]\), we know from Lemma 2.6 (b) that \(f'(w) < 0\), and together with Lemma 2.4 in [3] we have \(-2 \leq f'(w) \leq 0\) in this case. Our next result, Lemma 2.8, gives more detailed bounds on \(|f'(w)|\). We first recall the standard Gaussian tail bounds [3, pp. 37–38]

$$ \frac{we^{-w^{2}/2}}{\sqrt{2\pi}(1+w^{2})} \leq 1 - \Phi (w) \leq \frac{e^{-w^{2}/2}}{w\sqrt{2\pi}}, \quad w > 0 $$

and
$$ \frac{ \vert w \vert e^{-w^{2}/2}}{(1+w^{2})\sqrt{2\pi}} \leq \Phi (w) \leq \frac{e^{-w^{2}/2}}{ \vert w \vert \sqrt{2\pi}}, \quad w < 0. $$

Lemma 2.8

Let f be the bounded solution of (2.10).

(a) If \(x \geq 0\) then

$$\begin{aligned} & \bigl\vert f'(w) \bigr\vert \leq \frac{Nh}{1+w^{2}}, \quad w > x. \end{aligned}$$

(b) If \(x > 1/\sigma \) then

$$\begin{aligned} (i)\quad & \bigl\vert f'(w) \bigr\vert \leq \frac{Nh}{1+w^{2}}, \quad w \leq 0, \\ (ii)\quad & \bigl\vert f'(w) \bigr\vert \leq e^{\frac{8}{7}+\frac{1}{\sigma}} \frac{ \vert w \vert e^{-x}}{\sigma} + Nh, \quad w \in \biggl(0, \frac{3}{4}(x-1/ \sigma ) \biggr), \\ (iii)\quad & \bigl\vert f'(w) \bigr\vert \leq \frac{ \vert w \vert }{\sigma} + Nh, \quad w \in (0, x-1/\sigma ]. \end{aligned}$$

(c) If \(0 \leq x \leq 1/\sigma \) then

$$\begin{aligned} & \bigl\vert f'(w) \bigr\vert \leq \frac{Nh}{1+w^{2}}, \quad w \leq x-1/\sigma . \end{aligned}$$

(d) If \(x < 0\) then

$$\begin{aligned} (i)\quad & \bigl\vert f'(w) \bigr\vert \leq e^{ \frac{8}{7}}\frac{e^{- \vert x \vert }}{\sigma} + Nh, \quad w\in (3x/4, 0], \\ (ii)\quad & \bigl\vert f'(w) \bigr\vert \leq \frac{ \vert w \vert }{\sigma} + Nh, \quad w\in (x, 0], \\ (iii)\quad & \bigl\vert f'(w) \bigr\vert \leq \frac{Nh}{1+w^{2}}, \quad w\in (-\infty , x-1/\sigma ]\cup (0,\infty ). \end{aligned}$$


Proof

(a) is immediate from (2.33) together with the tail bounds (2.35).

For (b), when \(w \leq 0\), we have from (2.34) that \(f'(w) = \sqrt{2\pi}Nh|w|e^{w^{2}/2}[1-\Phi (|w|)] - Nh\) and (i) again follows from (2.35).

For (ii), as \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) in this case and \(e^{w^{2}/2} \leq e^{\frac{9}{32}(x-\frac{1}{\sigma})^{2}}\), we have

$$ \sqrt{2\pi}Nhe^{w^{2}/2} \leq \frac{e^{-\frac{7}{32}(x-\frac{1}{\sigma})^{2}}}{\sigma} \leq e^{8/7} \frac{e^{-(x-1/\sigma )}}{\sigma} $$

as \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) for \(t \geq 0\). The result then follows from (2.34).
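The elementary inequality used here, \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) for \(t \geq 0\), follows since \(g(t) = t - 7t^{2}/32\) is concave with maximum \(g(16/7) = 8/7\); a quick numerical confirmation:

```python
import math

# e^{-7 t^2/32} <= e^{8/7} e^{-t} for t >= 0 amounts to g(t) = t - 7 t^2/32 <= 8/7;
# g is concave with g'(t) = 1 - 7t/16, so the maximum is at t* = 16/7, g(t*) = 8/7.
t_star = 16 / 7
assert abs((t_star - 7 * t_star ** 2 / 32) - 8 / 7) < 1e-12

# Grid check of the inequality itself on [0, 20].
for j in range(2001):
    t = 0.01 * j
    assert math.exp(-7 * t * t / 32) <= math.exp(8 / 7) * math.exp(-t) * (1 + 1e-12)
print("e^{-7 t^2/32} <= e^{8/7} e^{-t} verified on [0, 20]")
```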

For (iii), use (2.34) together with \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) and \(e^{w^{2}/2} \leq e^{\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) for \(w \in (0, x-1/\sigma )\).

For (c), if \(w \leq x-1/\sigma \) then \(w \leq 0\), and we may write, from (2.34), \(f'(w) = \sqrt{2\pi}Nh|w| e^{w^{2}/2}[1-\Phi (|w|)] - Nh\), and we again get the result from (2.35).

(d)(i) follows in essentially the same way as (b)(ii) but now using (2.33) with \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2}\) and \(e^{-7t^{2}/32} \leq e^{8/7}e^{-|t|}\).

(d)(ii) follows in essentially the same way as (b)(iii) but using (2.33).

For (d)(iii), if \(w \leq x-1/\sigma \) then \(w < 0\), and we get the result in this case as in (c), while for \(w > 0\), we get the result as in (a). □

We now use Lemma 2.8 to give \(O(1/\sigma )\) bounds on \(|\mathbb{E}f'(\bar{W})|\) when \(\bar{W}\) is a random variable that is sufficiently close to W.

Lemma 2.9

If \(\sigma ^{2} \geq 1\) and \(\bar{W}\) is a random variable strictly between \(W^{(i)} - p_{i}/\sigma \) and \(W^{(i)} + (1-p_{i})/\sigma \), then

$$\begin{aligned} (a) \quad & \bigl\vert \mathbb{E}f'(\bar{W}) \bigr\vert \leq \frac{1.4}{\sigma} + \frac{1}{\sigma ^{2}}. \end{aligned}$$

Furthermore, if \(\sigma ^{2} \geq 5\) and \(\vert x \vert \geq 1/\sigma \), then

$$\begin{aligned} (b) \quad & \bigl\vert \mathbb{E} f'(\bar{W}) \bigr\vert \leq \bigl(3e^{ \frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06\bigr) \frac{e^{- \vert x \vert }}{\sigma} + \bigl(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}}\bigr) \frac{e^{- \vert x \vert }}{\sigma ^{2}}. \end{aligned}$$


The proof is given in Sect. 4.1. □

We now give our final two auxiliary results required to prove Theorems 1.1 and 1.2.

Lemma 2.10

If \(\sigma ^{2} \geq 1\) and \(\bar{W}\) is a random variable strictly between W and \(W^{(i)}+t\) with \(t\in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\), then

$$\begin{aligned} (a) \quad & \bigl\vert \mathbb{E}W^{(i)}f'(\bar{W}) \bigr\vert \leq \frac{1.98}{\sigma} + \frac{1}{\sigma ^{2}}. \end{aligned}$$

If we also have \(\sigma ^{2} \geq 5\) and \(\vert x \vert > 1.5\), then

$$\begin{aligned} (b) \quad & \bigl\vert \mathbb{E} W^{(i)}f'( \bar{W}) \bigr\vert \leq \bigl(3e^{ \frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 1.35 \bigr) \frac{e^{-x}}{\sigma} + \bigl(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06\bigr)\frac{e^{-x}}{\sigma ^{2}}. \end{aligned}$$


The proof is given in Sect. 4.2. □

Lemma 2.11

If \(\sigma ^{2} \geq 5\) and \(t \in (-p_{i}/\sigma , (1-p_{i})/\sigma ]\), then

$$ \bigl\vert \mathbb{E}f\bigl(W^{(i)} + t\bigr) \bigr\vert \leq \bigl(3e^{\frac{7}{3\sigma}} + e^{ \frac{8}{7}+\frac{1}{\sigma}} \bigr)\frac{e^{- \vert x \vert }}{\sigma} + 1.9e^{ \frac{1}{\sigma}} \bigl(1+e^{\frac{1}{\sigma}} \bigr) \frac{e^{- \vert x \vert }}{\sigma ^{2}}. $$


The proof is given in Sect. 4.3. □

2.4 The K-function

As discussed in Sect. 2.1, our problem reduces to bounding \(|\mathbb{E}\{f'(W)-Wf(W)\}|\), where f is the bounded solution of the Stein equation (2.10). To this end, define the functions \(K_{i}\), \(1\leq i \leq n\), by

$$ K_{i}(t) = \mathbb{E} \bigl[ \xi _{i} \bigl( \mathbf{1}\{0 \leq t \leq \xi _{i}\} - \mathbf{1}\{\xi _{i} \leq t < 0\} \bigr) \bigr] . $$

By Fubini’s theorem we find that

$$ \int _{-\infty}^{\infty}K_{i}(t)\,dt = \mathbb{E} \xi _{i}^{2}, \quad \text{and}\quad \int _{-\infty}^{\infty} \vert t \vert K_{i}(t)\,dt = \frac{1}{2} \mathbb{E} \vert \xi _{i} \vert ^{3}, $$

and so

$$ \sum_{i=1}^{n} \int _{-\infty}^{\infty}K_{i}(t)\,dt = 1 \quad \text{and} \quad \sum_{i=1}^{n} \int _{-\infty}^{\infty} \vert t \vert K_{i}(t)\,dt \leq \frac{1}{2\sigma} $$

since

$$ \sum_{i=1}^{n}\mathbb{E} \vert \xi _{i} \vert ^{3} \leq \frac{1}{\sigma}\sum _{i=1}^{n} \mathbb{E} \vert \xi _{i} \vert ^{2} = \frac{1}{\sigma}. $$
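For the centered, scaled Bernoulli \(\xi _{i}\), the standard Stein K-function \(K_{i}(t)=\mathbb{E}[\xi _{i}(\mathbf{1}\{0\leq t\leq \xi _{i}\}-\mathbf{1}\{\xi _{i}\leq t<0\})]\) (as in [3]) works out to the constant \(p_{i}(1-p_{i})/\sigma \) on \([-p_{i}/\sigma ,(1-p_{i})/\sigma ]\) and zero elsewhere, so the two moment identities can be checked by exact integration. A short numerical sketch (the helper name and test values are ours):

```python
def check_k_moments(p, sigma):
    """Exact check of int K = E xi^2 and int |t| K = E|xi|^3 / 2 for a
    centered, scaled Bernoulli xi (helper of ours, not from the paper)."""
    xi_plus, xi_minus = (1 - p) / sigma, -p / sigma
    height = p * (1 - p) / sigma                   # value of K on its support
    int_K = height * (xi_plus - xi_minus)          # support has length 1/sigma
    int_abs_t_K = height * (xi_plus ** 2 + xi_minus ** 2) / 2
    E_xi2 = p * xi_plus ** 2 + (1 - p) * xi_minus ** 2
    E_abs_xi3 = p * xi_plus ** 3 + (1 - p) * abs(xi_minus) ** 3
    assert abs(int_K - E_xi2) < 1e-12
    assert abs(int_abs_t_K - 0.5 * E_abs_xi3) < 1e-12

for p in (0.1, 0.3, 0.5, 0.9):
    for sigma in (1.0, 2.5, 10.0):
        check_k_moments(p, sigma)
print("K-function moment identities verified")
```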

Writing \(W^{(i)} = W-\xi _{i}\), it is shown in Sect. 2.3.1 of [3] that

$$ \mathbb{E}\bigl\{ f'(W) - Wf(W)\bigr\} = \sum _{i=1}^{n} \int _{-\infty}^{\infty} \mathbb{E} \bigl\{ f'\bigl(W^{(i)} + \xi _{i}\bigr) - f'\bigl(W^{(i)} + t\bigr) \bigr\} K_{i}(t) \,dt. $$

As f is the solution of (2.10), we may decompose the right-hand side of (2.45) as

$$\begin{aligned} & \sum_{i=1}^{n} \int _{-\infty}^{\infty }\mathbb{E} W^{(i)} \bigl[f\bigl(W^{(i)} + \xi _{i}\bigr) - f \bigl(W^{(i)} + t\bigr)\bigr]K_{i}(t)\,dt \end{aligned}$$
$$\begin{aligned} &\quad + \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} \xi _{i}f\bigl(W^{(i)}+ \xi _{i} \bigr)K_{i}(t)\,dt \end{aligned}$$
$$\begin{aligned} &\quad - \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} tf \bigl(W^{(i)}+t\bigr)K_{i}(t)\,dt \end{aligned}$$
$$\begin{aligned} &\quad + \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} \bigl(h_{x} \bigl(W^{(i)} + \xi _{i}\bigr) - h_{x} \bigl(W^{(i)} + t\bigr) \bigr)K_{i}(t)\,dt. \end{aligned}$$

Since \(\xi _{i} \in \{-p_{i}/\sigma , (1-p_{i})/\sigma \}\), we see that \(K_{i}(t)\neq 0\) requires that \(t \in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\). Thus in bounding (2.46)–(2.49), for each i, \(1 \leq i \leq n\), we may restrict our attention to \(t \in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\) as otherwise the integrands are zero. In particular this condition implies that \(|t| < 1/\sigma \) and \(|\xi _{i} - t| \leq 1/\sigma \).

3 Proofs of main results

We now give our proofs of Theorems 1.1 and 1.2, starting with Theorem 1.2, which is then used to simplify the proof of Theorem 1.1. We use the K-function approach and notation from Sect. 2.4. Our problem is to bound the four terms (2.46)–(2.49), and we will consider each term in turn.

3.1 Proof of Theorem 1.2

Bounding (2.46): For (2.46), there is a random \(\bar{W}\) between \(W^{(i)} + \xi _{i}\) and \(W^{(i)} + t\) such that \(f(W^{(i)} + \xi _{i}) - f(W^{(i)} + t) = f'(\bar{W})(\xi _{i} - t)\). Since for each i we only need to consider t such that \(|\xi _{i} - t| \leq 1/\sigma \), we may bound (2.46) as

$$\begin{aligned} & \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty }\mathbb{E} W^{(i)} \bigl[f\bigl(W^{(i)} + \xi _{i}\bigr) - f \bigl(W^{(i)} + t\bigr)\bigr]K_{i}(t)\,dt \Biggr\vert \\ &\quad \leq \frac{1}{\sigma}\sum_{i=1}^{n} \int _{-\infty}^{\infty} \bigl\vert \mathbb{E} W^{(i)}f'(\bar{W}) \bigr\vert K_{i}(t) \,dt = \frac{1}{\sigma}\sum_{i=1}^{n} \bigl\vert \mathbb{E} W^{(i)}f'(\bar{W}) \bigr\vert \int _{-\infty}^{\infty}K_{i}(t)\,dt \\ &\quad \leq \frac{1.98}{\sigma ^{2}} + \frac{1}{\sigma ^{3}} \end{aligned}$$

by (2.40).

Bounding (2.47): As \(\xi _{i} = (1-p_{i})/\sigma \) with probability \(p_{i}\) and \(\xi _{i} = -p_{i}/\sigma \) with probability \(1-p_{i}\), we have that

$$\begin{aligned} \bigl\vert \mathbb{E}\xi _{i}f\bigl(W^{(i)}+\xi _{i}\bigr) \bigr\vert &= \biggl\vert \frac{(1-p_{i})}{\sigma} \mathbb{E} f \biggl(W^{(i)} + \frac{1-p_{i}}{\sigma} \biggr)p_{i} - \frac{p_{i}}{\sigma}\mathbb{E} f \biggl(W^{(i)} - \frac{p_{i}}{\sigma} \biggr) (1-p_{i}) \biggr\vert \\ & \leq \frac{p_{i}(1-p_{i})}{\sigma}\mathbb{E} \biggl\{ \biggl\vert f \biggl(W^{(i)} + \frac{1-p_{i}}{\sigma} \biggr) - f \biggl(W^{(i)} - \frac{p_{i}}{\sigma} \biggr) \biggr\vert \biggr\} \\ & = \frac{p_{i}(1-p_{i})}{\sigma}\mathbb{E} \biggl\{ \frac{1}{\sigma} \bigl\vert f'( \bar{W}) \bigr\vert \biggr\} \end{aligned}$$

for some random variable \(\bar{W}\) strictly between \(W^{(i)} + (1-p_{i})/\sigma \) and \(W^{(i)} - p_{i}/\sigma \). Now, by Lemma 2.9 (a) and the fact that \(p(1-p) \in (0,1/4]\) when \(p\in (0,1)\), (2.47) may be bounded as

$$ \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} \xi _{i}f\bigl(W^{(i)}+ \xi _{i} \bigr)K_{i}(t)\,dt \Biggr\vert \leq \frac{1}{4\sigma ^{2}}\sum _{i=1}^{n} \bigl\vert \mathbb{E}f'(\bar{W}) \bigr\vert \int _{-\infty}^{\infty}K_{i}(t)\,dt \leq \frac{0.35}{\sigma ^{3}} + \frac{0.25}{\sigma ^{4}}. $$

Bounding (2.48): Using the fact that \(|f(W^{(i)}+t)| \leq 1/\sigma \) with (2.44) gives

$$ \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} tf \bigl(W^{(i)}+t\bigr)K_{i}(t)\,dt \Biggr\vert \leq \frac{0.5}{\sigma ^{2}}. $$

Bounding (2.49): We first find an expression for the functions \(K_{i}\), \(1\leq i \leq n\). Since \(P(\xi _{i} = (1-p_{i})/\sigma )=p_{i}\) and \(P(\xi _{i} = -p_{i}/\sigma )=1-p_{i}\), we have

$$ K_{i}(t) = \frac{p_{i}(1-p_{i})}{\sigma}, \quad t \in [-p_{i}/\sigma , (1-p_{i})/\sigma ], $$

and \(K_{i}(t) = 0\) otherwise, and hence

$$ \mathbb{E} \int _{-\infty}^{\infty}h_{x}\bigl(W^{(i)} + t\bigr)K_{i}(t)\,dt = \frac{p_{i}(1-p_{i})}{\sigma} \int _{-\frac{p_{i}}{\sigma}}^{\frac{1-p_{i}}{\sigma}}P\bigl(x-1/\sigma < W^{(i)}+t \leq x\bigr)\,dt . $$

Now we consider the value of \(P(x-1/\sigma -t < W^{(i)} \leq x - t)\) as t varies over the interval \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\). Since \(W^{(i)}\) takes values in the set \(A^{(i)}_{n} = \{(k-\mu ^{(i)})/\sigma : k\in \mathbb{Z}\cap [0, n-1] \}\), where \(\mu ^{(i)} = \mu -p_{i}\), and consecutive points of \(A^{(i)}_{n}\) are \(1/\sigma \) apart, the half-open interval \((x-1/\sigma -t, x-t]\) of length \(1/\sigma \) contains exactly one element of \(A^{(i)}_{n}\). Suppose \(x=(k-\mu )/\sigma \), \(k\in \mathbb{Z}\cap [0,n]\), and let \(x^{(i)} = (k - \mu ^{(i)})/\sigma \in A^{(i)}_{n}\). Then we have \(x=x^{(i)}-p_{i}/\sigma \) with \(p_{i}\in (0,1)\), and so

$$\begin{aligned} P\bigl(x-1/\sigma -t < W^{(i)} \leq x - t\bigr) &= P \bigl(x^{(i)}-1/\sigma - p_{i}/ \sigma -t < W^{(i)} \leq x^{(i)} - p_{i}/\sigma - t\bigr) \\ &= \textstyle\begin{cases} P(W^{(i)} = x^{(i)} - 1/\sigma ), & t\in (-p_{i}/\sigma , (1-p_{i})/ \sigma ] \\ P(W^{(i)} = x^{(i)}), & t = -p_{i}/\sigma . \end{cases}\displaystyle \end{aligned}$$

Thus we have

$$\begin{aligned} \int _{-\frac{p_{i}}{\sigma}}^{\frac{1-p_{i}}{\sigma}}P\bigl(x-1/\sigma < W^{(i)}+t \leq x\bigr)\,dt &= \frac{1}{\sigma}P \bigl(W^{(i)} = x^{(i)} - 1/\sigma \bigr) \\ &= \frac{1}{\sigma}P \biggl(W^{(i)} = \frac{k-\mu ^{(i)} - 1}{\sigma} \biggr) = \frac{1}{\sigma}P\bigl(S^{(i)} + 1 =k\bigr), \end{aligned}$$

where \(S^{(i)} = S - X_{i}\). Thus

$$\begin{aligned} \sum_{i=1}^{n}\mathbb{E} \int _{-\infty}^{\infty}h_{x} \bigl(W^{(i)} + t\bigr)K_{i}(t)\,dt &= \sum _{i=1}^{n}\frac{p_{i}(1-p_{i})}{\sigma ^{2}}P \bigl(W^{(i)}=x^{(i)}-1/ \sigma \bigr) \\ &= \sum_{i=1}^{n}\frac{\sigma _{i}^{2}}{\sigma ^{2}}P \bigl(S^{(i)}+ 1 =k\bigr) \\ &= P\bigl(S^{(I)}+1=k\bigr), \end{aligned}$$

where I is a random index with distribution \(P(I=i) = \sigma _{i}^{2}/\sigma ^{2}\), \(1\leq i \leq n\), independent of the \(X_{i}\).


Similarly, since \(W^{(i)} + \xi _{i} = W\) and \(\int _{-\infty}^{\infty}K_{i}(t)\,dt = \mathbb{E}\xi _{i}^{2}\),

$$\begin{aligned} \sum_{i=1}^{n}\mathbb{E} \int _{-\infty}^{\infty}h_{x} \bigl(W^{(i)}+\xi _{i}\bigr)K_{i}(t)\,dt = \sum_{i=1}^{n}\mathbb{E} \xi _{i}^{2}P(W=x) = P(W=x) = P(S = k). \end{aligned}$$

Thus we may bound (2.49) as

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} \bigl(h_{x} \bigl(W^{(i)} + \xi _{i}\bigr) - h_{x} \bigl(W^{(i)} + t\bigr) \bigr)K_{i}(t)\,dt \Biggr\vert &= \bigl\vert P(S=k) - P\bigl(S^{(I)} + 1 = k\bigr) \bigr\vert \\ & \leq \frac{1}{\sigma}P(S=k) \end{aligned}$$

with the inequality following from the proof of Theorem 1.1 of [10]. From (2.16) we get that

$$ \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} \bigl(h_{x} \bigl(W^{(i)} + \xi _{i}\bigr) - h_{x} \bigl(W^{(i)} + t\bigr) \bigr)K_{i}(t)\,dt \Biggr\vert \leq \frac{0.5}{\sigma ^{2}}. $$
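The key computation above — that the integral of \(P(x-1/\sigma -t < W^{(i)} \leq x-t)\) over \(t\in [-p_{i}/\sigma ,(1-p_{i})/\sigma ]\) equals \(\sigma ^{-1}P(S^{(i)}+1=k)\) for each i — can be sanity-checked numerically; `pb_pmf` below is a helper of ours computing the Poisson binomial pmf by convolution, and the midpoint rule is exact here because the integrand is constant on the open interval:

```python
import random

def pb_pmf(ps):
    """Poisson binomial pmf by the standard convolution recursion
    (helper of ours, not from the paper)."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for m, q in enumerate(pmf):
            new[m] += q * (1 - p)
            new[m + 1] += q * p
        pmf = new
    return pmf

random.seed(1)
ps = [random.uniform(0.2, 0.8) for _ in range(12)]
mu = sum(ps)
sigma = sum(p * (1 - p) for p in ps) ** 0.5
k = 6                                   # a support point of S; x = (k - mu)/sigma
x = (k - mu) / sigma

for i, p in enumerate(ps):
    pmf_i = pb_pmf(ps[:i] + ps[i + 1:])  # pmf of S^(i) = S - X_i
    mu_i = mu - p
    # Midpoint rule for the integral of P(x - 1/sigma - t < W^(i) <= x - t)
    # over t in [-p/sigma, (1-p)/sigma].
    N = 1000
    step = 1.0 / (N * sigma)
    total = 0.0
    for j in range(N):
        t = -p / sigma + (j + 0.5) * step
        prob = sum(pmf_i[m] for m in range(len(pmf_i))
                   if x - 1 / sigma - t < (m - mu_i) / sigma <= x - t)
        total += prob * step
    assert abs(total - pmf_i[k - 1] / sigma) < 1e-9, i
print("integral of P(x - 1/sigma < W^(i) + t <= x) equals P(S^(i)+1 = k)/sigma")
```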

Adding our bounds for (2.46)–(2.49) in (2.8) to the remainder R from Lemma 2.1 (a), we find that

$$ \sup_{k\in [0,n]\cap \mathbb{Z}}\triangle _{k} \leq \frac{3.23}{\sigma ^{2}} + \frac{1.35}{\sigma ^{3}} + \frac{0.25}{\sigma ^{4}} $$

as required.

3.2 Proof of Theorem 1.1

The uniform bound in Theorem 1.2 implies the nonuniform bound of Theorem 1.1 when \(|x| \leq 1.5\) as \(C_{1} > 3.15+7.39+4.5=15.04\), \(C_{2} > 12.03\), and \(C_{3} > 1.54\) while \(3.23e^{1.5} < 14.5\), \(1.35e^{1.5} < 6.1\), and \(0.25e^{1.5} < 1.2\). Thus we may assume that \(|x| > 1.5\) so that part (b) of Lemmas 2.9 and 2.10 apply.
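The numerical comparisons in this reduction can be verified directly; a quick check (values exactly as stated in the text, with the infima of \(C_{1}\), \(C_{2}\), \(C_{3}\) taken as \(\sigma \to \infty \) so that the exponentials tend to 1):

```python
import math

e15 = math.exp(1.5)
# Uniform-bound coefficients of Theorem 1.2 inflated by e^{1.5},
# since e^{-|x|} >= e^{-1.5} when |x| <= 1.5:
assert 3.23 * e15 < 14.5
assert 1.35 * e15 < 6.1
assert 0.25 * e15 < 1.2
# Lower bounds for C_1, C_2, C_3 over sigma >= sqrt(5) (exponentials exceed 1):
assert abs((3.15 + 7.39 + 4.5) - 15.04) < 1e-9 and 15.04 > 14.5
assert 2.58 + 4.87 + 4.58 > 6.1     # = 12.03
assert 0.79 + 0.75 > 1.2            # = 1.54
print("uniform bound of Theorem 1.2 covers |x| <= 1.5 in Theorem 1.1")
```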

Bounding (2.46): As in the uniform case, we have from (2.41)

$$\begin{aligned} & \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty }\mathbb{E} W^{(i)} \bigl[f\bigl(W^{(i)} + \xi _{i}\bigr) - f \bigl(W^{(i)} + t\bigr)\bigr]K_{i}(t)\,dt \Biggr\vert \\ &\quad \leq \frac{1}{\sigma}\sum_{i=1}^{n} \bigl\vert \mathbb{E} W^{(i)}f'(\bar{W}) \bigr\vert \int _{-\infty}^{\infty}K_{i}(t)\,dt \\ &\quad \leq \bigl(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 1.35 \bigr) \frac{e^{- \vert x \vert }}{\sigma ^{2}} + \bigl(3e^{ \frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06 \bigr) \frac{e^{- \vert x \vert }}{\sigma ^{3}}. \end{aligned}$$

Bounding (2.47): As in the uniform case, we have, with \(\bar{W}\) a random variable strictly between \(W^{(i)} - p_{i}/\sigma \) and \(W^{(i)} + (1-p_{i})/\sigma \), that

$$\begin{aligned} & \Biggl\vert \sum_{i=1}^{n}\mathbb{E} \int _{-\infty}^{\infty}\xi _{i}f \bigl(W^{(i)}+ \xi _{i}\bigr)K_{i}(t)\,dt \Biggr\vert \\ \end{aligned}$$
$$\begin{aligned} &\quad \leq \frac{1}{4\sigma ^{2}}\sum_{i=1}^{n} \bigl\vert \mathbb{E}f'(\bar{W}) \bigr\vert \int _{-\infty}^{\infty}K_{i}(t)\,dt \\ &\quad \leq \bigl(0.75e^{\frac{7}{3\sigma}} + 0.25e^{\frac{8}{7} + \frac{1}{\sigma}} + 0.52\bigr) \frac{e^{- \vert x \vert }}{\sigma ^{3}} + \bigl(0.75e^{ \frac{7}{3\sigma}} + 0.25e^{\frac{8}{7} + \frac{1}{\sigma}} \bigr) \frac{e^{- \vert x \vert }}{\sigma ^{4}} \end{aligned}$$

by (2.39).

Bounding (2.48): As in the uniform case, but now applying (2.11) with (2.44), we obtain

$$\begin{aligned} & \Biggl\vert \sum_{i=1}^{n} \int _{-\infty}^{\infty}\mathbb{E} tf \bigl(W^{(i)}+t\bigr)K_{i}(t)\,dt \Biggr\vert \\ &\quad \leq \bigl(1.5e^{\frac{7}{3\sigma}} + 0.5e^{\frac{8}{7}+\frac{1}{\sigma}}\bigr) \frac{e^{- \vert x \vert }}{\sigma ^{2}} + 0.95e^{\frac{1}{\sigma}}\bigl(1 + e^{ \frac{1}{\sigma}}\bigr) \frac{e^{- \vert x \vert }}{\sigma ^{3}}. \end{aligned}$$

Bounding (2.49): As in the uniform case, but now applying (2.29), we have

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n}\mathbb{E} \int _{-\infty}^{\infty} \bigl(h_{x} \bigl(W^{(i)} + \xi _{i}\bigr) - h_{x} \bigl(W^{(i)} + t\bigr) \bigr)K_{i}(t)\,dt \Biggr\vert \leq \frac{1}{\sigma}P(S=k) \leq 1.8\bigl(1+e^{1/\sigma}\bigr) \frac{e^{-|x|}}{\sigma ^{2}}. \end{aligned}$$

Adding our bounds for (2.46)–(2.49) in (2.9) to the remainder R from Lemma 2.1 (b), and using that \(e^{2/\sigma} \leq 0.87e^{7/3\sigma}\) for \(\sigma ^{2} \geq 5\), we find

$$ \triangle _{k} \leq \frac{e^{-|x|}}{\sigma ^{2}} \biggl(C_{1} + \frac{C_{2}}{\sigma} + \frac{C_{3}}{\sigma ^{2}} \biggr), $$


where

$$\begin{aligned} &C_{1} = 3.15 + 7.39e^{\frac{1}{\sigma}} + 4.5e^{\frac{7}{3\sigma}}, \\ &C_{2} = 2.58 + 4.87e^{\frac{1}{\sigma}} + 4.58e^{\frac{7}{3\sigma}}, \\ &C_{3} = 0.79e^{\frac{1}{\sigma}} + 0.75e^{\frac{7}{3\sigma}}, \end{aligned}$$

completing the proof.

4 Proofs of auxiliary results

4.1 Proof of Lemma 2.9


Throughout the proof we set \(A_{1} = (-\infty , x-1/\sigma ]\), \(A_{2} = (x-1/\sigma , x]\), and \(A_{3} = (x, \infty )\). We will bound \(\mathbb{E}f'(\bar{W})\) in two steps, first considering the case where \(\bar{W} \in A_{2}\) and then \(\bar{W} \notin A_{2}\). We also use the facts from Lemma 2.6 that \(f'(w) \leq 0\) when \(w \notin A_{2}\) and \(f'(w) \geq 0\) when \(w \in A_{2}\).

(a) Case 1: \(\bar{W} \in A_{2}\).

When \(\bar{W} \in A_{2}\) we have \(0 \leq f'(\bar{W}) \leq 1\) and \(W^{(i)} = x - (1-p_{i})/\sigma \). To see this latter fact, recall that \(W^{(i)}\) takes values in the set \(\mathcal{A}_{n}^{(i)} = \mathcal{A}_{n} + p_{i}/\sigma \), i.e., the support of \(W^{(i)}\) equals that of W translated by \(p_{i}/\sigma \). Thus, for example, we cannot have \(W^{(i)} = x + p_{i}/\sigma \), since then \(\bar{W}\) would lie in the interval \((x, x+1/\sigma )\), contradicting \(\bar{W} \in A_{2}\). From (2.17) we have that


and we note that this holds for any \(x \in \mathcal{A}_{n}\).

Case 2: \(\bar{W} \in A_{1}\cup A_{3}\).

Subcase 2.1: \(x \geq 1/\sigma \).

From Lemma 2.8 (a) and (b) and the fact that \(f'(w) < 0\) for \(w\in A_{1}\cup A_{3}\), we have that \(f'(w) \in (-|w|/\sigma - Nh, 0]\) when \(w\in A_{1}\cup A_{3}\). Applying the Cauchy–Schwarz inequality gives \(\mathbb{E}|W| \leq (\mathbb{E}W^{2})^{1/2} =1\), and so \(\mathbb{E}|\bar{W}| \leq \mathbb{E}|W| + 1/\sigma \leq 1 + 1/\sigma \). Also, as \(Nh \leq (\sigma \sqrt{2\pi})^{-1} < 0.4/\sigma \), we have


Subcase 2.2: \(0 \leq x < 1/\sigma \).

In this case Lemma 2.8 (a) and (c) provide a tighter bound on \(|f'(w)|\), \(w\in A_{1}\cup A_{3}\), than in Subcase 2.1 so that (4.2) still holds.

Subcase 2.3: \(x < 0\).

In this case applying Lemma 2.8 (d) shows that \(f'(w) \in (-|w|/\sigma - Nh, 0]\) for \(w\in A_{1}\cup A_{3}\), and the result follows in the same way as when \(x \geq 0\).

Thus from each subcase we see that (4.1) and (4.2) hold for all \(x\in \mathcal{A}_{n}\), which gives the result.

(b) First assume that \(x \geq 1/\sigma \). In slight contrast to the proof of part (a), we now consider the contributions to \(\mathbb{E}f'(\bar{W})\) when \(\bar{W}\) is in the sets \((x-1/\sigma , x]\), \((-\infty , \frac{3}{4}(x-1/\sigma ))\cup (x,\infty )\), and \((\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\). The sets \(A_{1}\), \(A_{2}\), and \(A_{3}\) are as in part (a).

Case 1: \(\bar{W} \in (x-1/\sigma , x] = A_{2}\).

In this case we have from Lemma 2.6 that \(0 \leq f'(\bar{W}) \leq 1\), and as in the proof of part (a), \(W^{(i)} = x-(1-p_{i})/\sigma \). Thus, for all \(x \in \mathcal{A}_{n}\),


by (2.30).

Case 2: \(\bar{W} \in (-\infty , \frac{3}{4}(x-1/\sigma ))\cup (x,\infty )\).

By Lemma 2.8 parts (a), (b)(i) and (b)(ii) together with the fact that \(f'(w) \leq 0\) when \(w \notin A_{2}\), we have that \(f'(w) \in [-e^{\frac{8}{7} + \frac{1}{\sigma}}|w|e^{-x}\sigma ^{-1}-Nh, 0]\) for \(w \in (-\infty , \frac{3}{4}(x-1/\sigma )]\cup (x,\infty )\). Using this together with the fact that \(\mathbb{E}|\bar{W}| \leq \mathbb{E}|W|+ 1/\sigma \leq 1+1/\sigma \) and Lemma 2.7 (a), we get


where \(A = (-\infty , \frac{3}{4}(x-1/\sigma )]\cup (x,\infty )\).

Case 3: \(\bar{W} \in (\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\).

In this case from Lemma 2.8 (b)(iii),

Now, by Hölder’s inequality, we have for each \(p\in \mathbb{N}\) that

Letting \(p\to \infty \) and applying (2.19) and (2.20), we find

and so, again using that \(f'(w) \leq 0\) when \(w \notin A_{2}\), we have


Combining (4.4) and (4.5) we get


Since \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06> 1.9e^{1/ \sigma}(1+e^{1/\sigma})\) when \(\sigma \geq \sqrt{5}\), from (4.6) and (4.3) we see that

$$ \bigl\vert \mathbb{E} f'(\bar{W}) \bigr\vert \leq \bigl(3e^{\frac{7}{3\sigma}} + e^{ \frac{8}{7} + \frac{1}{\sigma}} + 2.06\bigr)\frac{e^{-x}}{\sigma} + \bigl(3e^{ \frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}}\bigr) \frac{e^{-x}}{\sigma ^{2}}. $$

The result follows in a similar way when \(x \leq -1/\sigma \). □
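The constant comparison used just above, \(3e^{7/(3\sigma )}+e^{8/7+1/\sigma}+2.06 > 1.9e^{1/\sigma}(1+e^{1/\sigma})\) for \(\sigma \geq \sqrt{5}\), can be confirmed on a grid (both sides decrease to constant limits as \(\sigma \to \infty \)); a quick check:

```python
import math

# LHS(s) = 3 e^{7/(3s)} + e^{8/7 + 1/s} + 2.06, RHS(s) = 1.9 e^{1/s}(1 + e^{1/s}).
def lhs(s):
    return 3 * math.exp(7 / (3 * s)) + math.exp(8 / 7 + 1 / s) + 2.06

def rhs(s):
    return 1.9 * math.exp(1 / s) * (1 + math.exp(1 / s))

# Geometric grid over sigma in [sqrt(5), 1000].
s = math.sqrt(5)
while s < 1000:
    assert lhs(s) > rhs(s), s
    s *= 1.01
# Limits as sigma -> infinity: 3 + e^{8/7} + 2.06 > 1.9 * 2.
assert 3 + math.exp(8 / 7) + 2.06 > 3.8
print("constant comparison holds on the grid and in the limit")
```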

4.2 Proof of Lemma 2.10


As in the proof of Lemma 2.9, we let \(A_{1} = (-\infty , x-1/\sigma ]\), \(A_{2} = (x-1/\sigma , x]\) and \(A_{3} = (x, \infty )\).

(a) We first consider the case where \(x \geq 1/\sigma \). For each \(p\in \mathbb{N}\), we have, by Hölder’s inequality, that

$$ \bigl\vert \mathbb{E}W^{(i)}f'(\bar{W}) \bigr\vert \leq \bigl(\mathbb{E} \bigl\vert W^{(i)} \bigr\vert ^{2p} \bigr)^{\frac{1}{2p}} \bigl(\mathbb{E} \bigl\vert f'(\bar{W}) \bigr\vert ^{\frac{2p}{2p-1}} \bigr)^{\frac{2p-1}{2p}}. $$

We will bound the second factor appearing on the right of (4.7) separately for \(\bar{W} \notin A_{2}\) and \(\bar{W} \in A_{2}\), starting with the case \(\bar{W} \notin A_{2}\).

By Lemma 2.8 parts (a) and (b)(iii) together with the fact that \(Nh \leq (\sigma \sqrt{2\pi})^{-1} \leq 0.4/\sigma \), we have that \(|f'(\bar{W})| \leq |\bar{W}|/\sigma + 0.4/\sigma \) when \(\bar{W} \in A_{1} \cup A_{3}\). Thus, using that \((a+b)^{q} \leq 2^{q-1}(a^{q} + b^{q})\) whenever \(a, b > 0\) and \(q \geq 1\) together with \(|\bar{W}| \leq |W| + 1/\sigma \), we have


where we used that \(\mathbb{E}|W|^{2p/(2p-1)} \leq (\mathbb{E}|W|^{2})^{ \frac{2p}{2(2p-1)}} = 1\).

Now, when \(\bar{W} \in A_{2}\) we have \(|f'(\bar{W})| \leq 1\) and \(W^{(i)} = x - (1-p_{i})/\sigma \). The latter fact follows in a similar way as the case \(\bar{W} \in A_{2}\) in the proof of Lemma 2.9. For example, if \(\bar{W} \in A_{2}\) then we cannot have \(W^{(i)} = x + p_{i}/\sigma \) as then \(W^{(i)} + t \in [x, x+1/\sigma ]\) and \(W \in \{x, x+1/\sigma \}\), and it is impossible for \(\bar{W}\) to be strictly between W and \(W^{(i)} + t \) while at the same time \(\bar{W} \in A_{2}\). Similarly, we see that it is impossible for \(W^{(i)} = x - 2/\sigma + p_{i}/\sigma \). Thus, we have from (2.15) that for all \(x \in \mathcal{A}_{n}\)


From (4.8) and (4.9) we have that

$$\begin{aligned} & \mathbb{E} \bigl\vert f'(\bar{W}) \bigr\vert ^{\frac{2p}{2p-1}} \\ &\quad \leq \frac{2^{1/(2p-1)}}{\sigma ^{2p/(2p-1)}} \biggl\{ 0.58\bigl(2^{ \frac{-1}{2p-1}}\bigr) \sigma ^{1/(2p-1)} + 2^{1/(2p-1)} \biggl(1 + \frac{1}{\sigma ^{2p/(2p-1)}} \biggr) + 0.4^{\frac{2p}{2p-1}} \biggr\} , \end{aligned}$$

and so

$$\begin{aligned} & \bigl(\mathbb{E} \bigl\vert f'(\bar{W}) \bigr\vert ^{\frac{2p}{2p-1}} \bigr)^{ \frac{2p-1}{2p}} \\ &\quad \leq \frac{2^{1/2p}}{\sigma} \biggl\{ 0.58\bigl(2^{\frac{-1}{2p-1}}\bigr) \sigma ^{1/(2p-1)} + 2^{1/(2p-1)} \biggl(1 + \frac{1}{\sigma ^{2p/(2p-1)}} \biggr) + 0.4^{\frac{2p}{2p-1}} \biggr\} ^{ \frac{2p-1}{2p}} \end{aligned}$$

and using this in (4.7) and letting \(p\to \infty \) gives (2.40).

As in the proof of Lemma 2.9 (a), we note that when \(0 \leq x < 1/\sigma \), then Lemma 2.8 (a) and (c) provide a tighter bound on \(|f'(w)|\), \(w\in A_{1}\cup A_{3}\). This together with the fact that (4.9) holds for all \(x \in \mathcal{A}_{n}\) gives the result when \(0 \leq x < 1/\sigma \). The case \(x < 0\) is dealt with in a similar way using part (d) of Lemma 2.8.

(b) Our strategy is slightly different than for part (a). We will consider the contributions to \(\mathbb{E}W^{(i)}f'(\bar{W})\) when \(\bar{W}\) is in \(A_{1}\), \(A_{2}\), and \(A_{3}\). As \(f'(\bar{W})\) is positive when \(\bar{W}\in A_{2}\) and negative otherwise, together with the fact that \(|\bar{W}-W^{(i)}| \leq 1/\sigma \), we will be able to keep track of the signs of the various contributions and obtain some partial cancellation that would not be possible with a simple use of Hölder’s inequality as in part (a).

We first assume that \(x > 1.5\).

Case 1: \(\bar{W} \in A_{3}\).

In this case \(\bar{W}> x > 1.5\) and as \(|\bar{W} - W^{(i)}| < 1/\sigma \leq 1/\sqrt{5}\), we also have \(W^{(i)} > 0\) and hence \(W^{(i)}f'(\bar{W}) < 0\). Thus by Lemma 2.8 (a)

and so


Case 2: \(\bar{W} \in A_{1}\).

We write \(A_{1} = (-\infty ,0) \cup [0, \frac{3}{4} (x-1/\sigma ) ] \cup (\frac{3}{4} (x-1/\sigma ), x-1/\sigma ]\) and consider the contribution to \(\mathbb{E}W^{(i)}f'(\bar{W})\) from each set. For \(\bar{W} \in (-\infty ,0)\) we have that \(W^{(i)} \in (-\infty , 1/\sigma )\) and


and the first term on the right of (4.11) is positive and the second negative. Now, from Lemma 2.8 (b)(i) we have that


and so using the fact that \(|\bar{W}-W^{(i)}| \leq 1/\sigma \) gives

$$\begin{aligned} & \leq \frac{Nh}{2} + \frac{Nh}{\sigma}. \end{aligned}$$

Using the bound for Nh from Lemma 2.7 (a), we get


For the second term in (4.11) we have

Thus, as the second term in (4.11) is negative, we have

and so together with (4.15) this implies


Now suppose that \(\bar{W}\in [0, \frac{3}{4} (x - 1/\sigma ) ]\) and write

with the first term negative and the second positive. Now applying Lemma 2.8 (b)(ii) we have

where we used that \(\mathbb{E}|W^{(i)}\bar{W}| \leq \mathbb{E}|W^{(i)}|^{2} + \sigma ^{-1} \mathbb{E}|W^{(i)}|\). Hence,



which together with (4.17) implies that


Now consider \(\bar{W} \in (\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\). As \(x > 1.5\) and \(\sigma \geq \sqrt{5}\), we have \(\frac{3}{4}(x-1/\sigma ) > 0.78\), and thus \(W^{(i)} > 0\) and \(W^{(i)} f'(\bar{W}) < 0\) in this case.

We have by Lemma 2.8 (b)(iii) that for each \(p\in \mathbb{N}\),


Now, using (2.20) we find

$$ P \biggl(\frac{4\bar{W}}{3} > x-\frac{1}{\sigma} \biggr) \leq e^{-(x-1/ \sigma )}\mathbb{E}e^{4\bar{W}/3} \leq e^{\frac{7}{3\sigma}}e^{-x} \mathbb{E}e^{4W/3} \leq 3e^{\frac{7}{3\sigma}}e^{-x} $$

and using this together with (2.19) in (4.19) and letting \(p\to \infty \) gives


and combining this with our bounds (4.16) and (4.18), we find that

where we used that \(\sigma \geq \sqrt{5}\) in the last line. Together with our bound (4.10) we have


Case 3: \(\bar{W} \in A_{2}\).

In this case, \(W^{(i)} \geq x-2/\sigma \geq 0\), and so as \(f'(\bar{W})\in [0,1]\) we have \(W^{(i)}f'(\bar{W}) \geq 0\). As in part (a), \(W^{(i)} = x-1/\sigma +p_{i}/\sigma \), and so again using Hölder’s inequality


Letting \(p\to \infty \), we get from (2.19) and (2.29) that


Since \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 1.35 > 1.9e^{1/ \sigma}(1+e^{1/\sigma}) + 0.515\) and \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} > 1.45e^{ \frac{8}{7} + \frac{1}{\sigma}}\) when \(\sigma \geq \sqrt{5}\), we have from (4.21) and (4.23) that

$$ \bigl\vert \mathbb{E} W^{(i)}f'(\bar{W}) \bigr\vert \leq \bigl(3e^{ \frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 1.35\bigr) \frac{e^{-x}}{\sigma} + \bigl(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06\bigr) \frac{e^{-x}}{\sigma ^{2}}. $$

The case \(x < -1.5\) follows in a similar way. □

4.3 Proof of Lemma 2.11

We assume that \(x \geq 0\) and the sets \(A_{1}\), \(A_{2}\), and \(A_{3}\) are as in the proofs of the previous lemmas. From (2.32), we see that f is negative on \(A_{1}\) and positive on \(A_{3}\).

From (2.32), the tail bound (2.35), and Lemma 2.7, we have that


if \(x \geq 1\). Since \(e^{w^{2}/2}[1-\Phi (w)]\) is a decreasing function of w, we have that when \(x \in [0,1]\)


and from (4.24), we see this holds for all \(x \geq 0\).
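The monotonicity invoked above — that \(w\mapsto e^{w^{2}/2}[1-\Phi (w)]\) is decreasing (its derivative is \(we^{w^{2}/2}[1-\Phi (w)] - 1/\sqrt{2\pi}\), which is nonpositive by the upper tail bound (2.35)) — can also be checked numerically; a sketch:

```python
import math

def g(w):
    """e^{w^2/2} (1 - Phi(w)), with 1 - Phi(w) = 0.5 erfc(w/sqrt(2))."""
    return math.exp(w * w / 2) * 0.5 * math.erfc(w / math.sqrt(2))

prev = g(-8.0)
for i in range(1, 1601):
    w = -8.0 + 0.01 * i
    cur = g(w)
    assert cur <= prev * (1 + 1e-12), w   # nonincreasing, up to rounding
    prev = cur
print("e^{w^2/2}(1 - Phi(w)) decreases on [-8, 8]")
```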

Now we consider the case where \(W^{(i)}+t \in A_{1}\). Again, from (2.32) we have that

using that \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) for \(t \geq 0\); from the tail bounds (2.36) we see that this also holds for \(W^{(i)} + t \leq 0\). Thus,


Now, by Markov’s inequality, (2.20), and the fact that \(|f(w)|\leq 1/\sigma \), we have

and thus


Finally, when \(W^{(i)}+t \in A_{2}\), we have that \(W^{(i)} = x -1/\sigma +p_{i}/\sigma \), and so


From (4.25), (4.26), (4.27), and (4.28) we see that the claimed bound holds. The case where \(x < 0\) is dealt with in a similar way.

Data Availability

No datasets were generated or analysed during the current study.


  1. Cox, D.R.: The continuity correction. Biometrika 57, 217–219 (1970)


  2. Emura, T., Liao, Y.-T.: Critical review and comparison of continuity correction methods: the normal approximation to the binomial distribution. Commun. Stat., Simul. Comput. 47, 2266–2285 (2017)


  3. Chen, L.H.Y., Goldstein, L., Shao, Q.: Normal Approximation by Stein’s Method. Probability and Its Applications. Springer, Heidelberg (2011)


  4. Goldstein, L., Reinert, G.: Stein’s method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7, 935–952 (1997)


  5. Fang, X.: Discretized normal approximation by Stein’s method. Bernoulli 20, 1404–1431 (2014)


  6. McDonald, D.: The local limit theorem: a historical perspective. JIRSS 4, 73–86 (2005)


  7. Petrov, V.V.: Sums of Independent Random Variables. de Gruyter, Berlin, Boston (1975)


  8. Zolotukhin, A., Nagaev, S., Chebotarev, V.: On a bound of the absolute constant in the Berry-Esseen inequality for i.i.d. Bernoulli random variables. Mod. Stoch. Theory Appl. 5, 385–410 (2018)


  9. Siripraparat, T., Neammanee, K.: A local limit theorem for Poisson binomial random variables. ScienceAsia 47, 111–116 (2021)


  10. Auld, G., Neammanee, K.: A non-uniform local limit theorem for Poisson binomial random variables via Stein’s method. J. Inequal. Appl. (2024)

  11. Chen, L.H.Y., Shao, Q.-M.: Normal approximation under local dependence. Ann. Probab. 32, 1985–2028 (2004)


  12. Röllin, A.: Translated Poisson approximation using exchangeable pair couplings. Ann. Appl. Probab. 17, 1596–1614 (2007)


  13. Barbour, A.D., Röllin, A., Ross, N.: Error bounds in local limit theorems using Stein’s method. Bernoulli 25, 1076–1104 (2019)


  14. Röllin, A.: Symmetric and centered binomial approximation of sums of locally dependent random variables. Electron. J. Probab. 13, 756–776 (2008)


  15. Stein, C.: Approximate Computation of Expectations. Lecture Notes-Monograph Series. Institute of Mathematical Statistics, Hayward (1986)


  16. Goldstein, L.: Berry-Esseen bounds for combinatorial central limit theorems and pattern occurrences, using zero and size biasing. J. Appl. Probab. 42, 661–683 (2005)


  17. El Karoui, N., Jiao, Y.: Stein’s method and zero bias transformation for CDO tranche pricing. Finance Stoch. 13, 151–180 (2009)


  18. Barbour, A.D., Jensen, J.L.: Local and tail approximations near the Poisson limit. Scand. J. Stat. 16, 75–87 (1989)


  19. Andrews, G.E., Eriksson, K.: Integer Partitions. Cambridge University Press, Cambridge (2004)




Acknowledgements

The authors are grateful to two anonymous reviewers for their comments that helped improve our paper. KN is grateful to the Centre of Excellence in Mathematics for financial support.


Funding

This research is supported by Ratchadapisek Somphot Fund for Postdoctoral Fellowship, Chulalongkorn University.

Author information

Authors and Affiliations



Both authors contributed equally to this research.

Corresponding author

Correspondence to Kritsana Neammanee.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit



Cite this article

Auld, G., Neammanee, K. Explicit constants in the nonuniform local limit theorem for Poisson binomial random variables. J Inequal Appl 2024, 67 (2024).
