Explicit constants in the nonuniform local limit theorem for Poisson binomial random variables
Journal of Inequalities and Applications volume 2024, Article number: 67 (2024)
Abstract
In a recent paper the authors proved a nonuniform local limit theorem concerning normal approximation of the point probabilities \(P(S=k)\) when \(S=\sum_{i=1}^{n}X_{i}\) and \(X_{1},X_{2},\ldots ,X_{n}\) are independent Bernoulli random variables that may have different success probabilities. However, their main result contained an undetermined constant, somewhat limiting its applicability. In this paper we give a nonuniform bound in the same setting but with explicit constants. Our proof uses Stein’s method and, in particular, the K-function and concentration inequality approaches. We also prove a new uniform local limit theorem for Poisson binomial random variables that is used to help simplify the proof in the nonuniform case.
1 Introduction
Approximation of complicated distributions by simpler ones, on the basis of asymptotic theory, is a ubiquitous theme in probability and statistics. By far the most commonly used and well-known such result is the central limit theorem (CLT), which ensures the weak convergence of appropriately normalized sums of independent random variables to a standard normal distribution. Statisticians frequently invoke the CLT to construct approximate confidence intervals and hypothesis tests. Due to their widespread use, it is clearly important to understand the quality of commonly applied probability approximations as a function of the sample size.
In order to improve the quality of the normal approximation of an integer-valued random variable, it is standard to apply a continuity correction [1, 2]. Thus, if S is an integer-valued random variable with mean μ and variance \(\sigma ^{2}\), and \(Z_{\mu , \sigma ^{2}} \sim N(\mu ,\sigma ^{2})\), a continuity corrected normal approximation of \(P(a \leq S \leq b)\), \(a, b \in \mathbb{Z}\), is \(P(a-0.5 \leq Z_{\mu , \sigma ^{2}} \leq b+0.5)\).
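As a quick numerical illustration (not part of the original analysis; the binomial parameters below are arbitrary choices), the continuity-corrected approximation can be compared with an exact probability in a few lines of Python:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cc_normal_approx(a, b, mu, sigma):
    """Continuity-corrected normal approximation of P(a <= S <= b)
    for an integer-valued S with mean mu and standard deviation sigma."""
    return normal_cdf(b + 0.5, mu, sigma) - normal_cdf(a - 0.5, mu, sigma)

# Compare with the exact Binomial(30, 0.4) probability P(10 <= S <= 14).
n, p = 30, 0.4
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(10, 15))
approx = cc_normal_approx(10, 14, mu, sigma)
print(round(exact, 4), round(approx, 4))
```

Here the two values agree to roughly two decimal places; without the ±0.5 correction the normal approximation of this interval probability is noticeably worse.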
Section 7.1 of [3] studies the accuracy of the normal approximation with continuity correction in the case where \(S=\sum_{i=1}^{n}X_{i}\) and \(X_{1},\ldots ,X_{n}\) are independent Bernoulli random variables with distributions \(P(X_{i}=1)=p_{i}=1-P(X_{i}=0)\), \(p_{i}\in (0,1)\). In this case, S is said to have a Poisson binomial distribution. It is shown, in their Theorem 7.1, that if \(\sigma ^{2} = \text{Var}(S)\) then
where \(d_{TV}\) is the total variation distance and Y is an integer-valued random variable with distribution
and \(Z\sim N(0,1)\). The random variable Y defined by (1.2) is said to have a discretized normal distribution with parameters μ and \(\sigma ^{2}\), written \(Y\sim N^{d}(\mu , \sigma ^{2})\). The proof of (1.1) uses Stein’s method and the zero bias coupling of [4]. [5] also considers discretized normal approximation via Stein’s method, giving bounds in the total variation distance for a wide range of examples including sums of locally dependent integervalued random variables.
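Both distributions above are straightforward to evaluate exactly, so the quality of the discretized normal approximation can be inspected directly. The following sketch (the success probabilities are arbitrary choices for illustration) computes the Poisson binomial pmf by dynamic programming, the discretized normal pmf defined by (1.2), and the total variation distance between them restricted to \(\{0,\ldots ,n\}\):

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S = sum of independent Bernoulli(p_i), by dynamic programming."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)      # X_i = 0
            new[k + 1] += q * p        # X_i = 1
        pmf = new
    return pmf

def discretized_normal_pmf(mu, sigma, lo, hi):
    """P(Y=k) = P(k - 1/2 < Z <= k + 1/2) for Z ~ N(mu, sigma^2), k = lo..hi."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return [cdf(k + 0.5) - cdf(k - 0.5) for k in range(lo, hi + 1)]

ps = [0.2, 0.5, 0.5, 0.7, 0.3, 0.6, 0.4, 0.5, 0.35, 0.65]
pmf = poisson_binomial_pmf(ps)
mu = sum(ps)
sigma = math.sqrt(sum(p * (1 - p) for p in ps))
dn = discretized_normal_pmf(mu, sigma, 0, len(ps))
# Total variation distance restricted to {0, ..., n}; the mass of Y outside
# this range is negligible for these parameters.
d_tv = 0.5 * sum(abs(a - b) for a, b in zip(pmf, dn))
print(round(d_tv, 4))
```

The restriction to \(\{0,\ldots ,n\}\) is harmless here since Y places negligible mass outside this range.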
In addition to considering central limit theorems and bounds in the total variation metric, we may analyze the accuracy of a local normal approximation of the point probabilities \(P(S=k)\), when S is integer-valued, via the quantity
Proving local limit theorems for a general integer-valued random variable is more delicate than proving central limit theorems as conditions are required to ensure that S does not concentrate on a lattice of span greater than 1. For example, if S is a sum of random variables that are concentrated on the even integers, then \(P(S=k)=0\) for odd k, and a normal approximation for S cannot be expected to be successful uniformly over \(\mathbb{Z}\). Consequently, local limit theorems have been comparatively less studied than central limit theorems although they came first in the historical development of probability [6].
Local limit theorems with uniform error bounds for sums of independent integervalued random variables are studied in Chap. 7 of [7] via Fourier analysis of characteristic functions. Sufficient conditions are given that ensure \(\sup_{k\in \mathbb{Z}}\triangle _{k} = O(1/\sigma ^{2})\), which is shown to be the optimal order for the error as a function of σ. However, explicit constants are not given in the error bounds, and much of the subsequent literature on local limit theorems presents uniform bounds for \(\triangle _{k}\) using the O symbol without explicit constants. More recently, [8] and [9] give uniform bounds for \(\triangle _{k}\) with explicit constants in the cases where S has a binomial and Poisson binomial distribution respectively.
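Assuming \(\triangle _{k}\) denotes the local error \(|P(S=k) - (\sigma \sqrt{2\pi})^{-1}e^{-(k-\mu )^{2}/(2\sigma ^{2})}|\) (its defining display is not reproduced above), the decay of \(\sup_{k}\triangle _{k}\) is easy to observe numerically; the sketch below uses identical \(p_{i}=0.3\) purely for illustration:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of the Poisson binomial distribution via dynamic programming."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

def sup_delta(ps):
    """sup_k |P(S=k) - normal density at k| for a Poisson binomial S."""
    pmf = poisson_binomial_pmf(ps)
    mu = sum(ps)
    var = sum(p * (1 - p) for p in ps)
    sigma = math.sqrt(var)
    dens = lambda k: math.exp(-((k - mu) ** 2) / (2 * var)) / (sigma * math.sqrt(2 * math.pi))
    return max(abs(pmf[k] - dens(k)) for k in range(len(pmf)))

# The error should shrink roughly like 1/sigma^2 as n grows (p_i = 0.3 here).
for n in (10, 40, 160):
    var = n * 0.3 * 0.7
    print(n, round(sup_delta([0.3] * n) * var, 3))
```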
Theorem 1.1 of [10] gives a nonuniform bound for \(\triangle _{k}\) when S has a Poisson binomial distribution. It was shown that if \(\sigma ^{2} \geq 1\), then for each \(k\in \mathbb{Z}\cap [0,n]\) we have
for some positive absolute constant C. The main novelty in this result is the nonuniformity in k, which makes explicit how \(\triangle _{k}\) decays the further k is into the tail of the distribution, an aspect lost in previous studies that only give uniform bounds.
The presence of an undetermined constant in (1.4) somewhat limits the result's applicability. We remedy this here with the following explicit nonuniform bound.
Theorem 1.1
Let \(X_{1}, X_{2},\ldots , X_{n}\) be jointly independent Bernoulli random variables such that \(P(X_{i}=1) = 1-P(X_{i} = 0)=p_{i} \in (0,1)\), and let \(S = \sum_{i=1}^{n}X_{i}\), \(\mu = \mathbb{E} S\), and \(\sigma ^{2} = \textit{Var}(S)\). If \(\sigma ^{2} \geq 5\), then for each \(k\in [0,n]\cap \mathbb{Z}\),
where
A trivial corollary of Theorem 1.1 is to give a value of the constant C appearing in (1.4), albeit under a slightly more restrictive condition on \(\sigma ^{2}\). For example, if \(\sigma ^{2} \geq 5\), 25, and 100, then one may take \(C=38.6\), 22.7, and 18.4 respectively.
Our proof of Theorem 1.1 uses Stein’s method, in particular the K-function and concentration inequality approaches, which are both discussed in Sect. 2. In [10], (1.4) was proved using the zero bias coupling [4]. The use of the K-function approach here allows for a more direct determination of constants as we avoid the need to prove an intermediate result concerning normal approximation of the zero biased random variable as in Theorem 3.1 of [10]. While the use of the K-function and concentration inequalities is a standard approach for proving quantitative Berry–Esseen bounds for sums of independent random variables [3, Chap. 3], and even for locally dependent random variables [11], this paper appears to be the first to use this approach to prove a local limit theorem. The zero bias coupling still plays a role when we derive concentration inequalities in Sect. 2.2. Although some previous studies have used Stein’s method to prove local limit theorems in more general settings, they consider only uniform bounds with different approximating distributions such as the translated Poisson [12, 13] or symmetric binomial [14] distributions.
It is easily checked that the normal density function appearing in Theorem 1.1 may be replaced by the discretized normal distribution (1.2) at the cost of different constants, as we make explicit in Lemma 2.2 of Sect. 2.1. However, the formulation in terms of the normal density is in keeping with the classical literature on local limit theorems such as [7].
In our proof of Theorem 1.1, we will also make use of the following uniform local limit theorem, which we prove using the same basic approach as for Theorem 1.1.
Theorem 1.2
Under the same setup as Theorem 1.1 but assuming only \(\sigma ^{2} \geq 1\), we have
We will not consider the question of whether a bound of the form (1.4) is optimal. It is conceivable that one could obtain a faster decaying function of k than the exponential decay in our result. Proving optimality of any such bound would likely involve more sophisticated techniques than those used in this paper, and we leave it as an interesting open problem for future research.
The remainder of the paper is structured as follows. Section 2 covers the appropriate background material, in particular Stein’s method for local limit theorems as developed in [10] required for proving our main results, as well as giving some useful auxiliary lemmas. Section 3 gives the proof of our main result, Theorem 1.1, as well as that of Theorem 1.2, while proofs of some of the auxiliary results are given in Sect. 4.
2 Background and auxiliary results
In this section we cover the necessary prerequisites and give some auxiliary results required to prove Theorem 1.1. Section 2.1 introduces Stein’s method for normal approximation and the setup of [10] required for local limit theorems. Section 2.2 introduces the zero bias coupling, which is used to derive various concentration inequalities. Section 2.3 considers properties of the solution of the Stein equation and its derivative while, finally, Sect. 2.4 introduces the K-function, which is our main technique for manipulating the Stein equation and proving Theorem 1.1.
2.1 Stein’s method for local limit theorems
Let \(\mathcal{F}\) be the set of absolutely continuous functions \(f:\mathbb{R}\to \mathbb{R}\) such that \(f'\) exists almost everywhere and \(\mathbb{E}|f'(Z)| < \infty \) where, here and for the remainder of the paper, \(Z\sim N(0,1)\). Stein’s method for normal approximation revolves around the following characterization of the normal distribution. A random variable W has a standard normal distribution if and only if
For a proof of this characterization, see Lemma 1 of [15] or Lemma 2.1 of [3].
Now let \(f:=f_{h}\) be the bounded solution of the ordinary differential equation
with \(h\in \mathcal{H}\), where \(\mathcal{H}\) is a class of test functions that will be chosen depending on the problem at hand. For example, suppose that we wish to bound the Kolmogorov distance
between the random variable W, not necessarily normally distributed, and Z. The class of test functions in this case is
The Kolmogorov metric gives a uniform bound on the absolute differences of the distribution functions of W and Z and is the appropriate metric to consider in order to prove the Berry–Esseen theorem [3, Chap. 3]. Replacing w by W in (2.2) and taking expectations, we see that a bound on the Kolmogorov metric may be obtained from
Boundedness properties of f and \(f'\) together with various coupling techniques that have been developed [3, Chap. 2] mean that it is often more straightforward to obtain a bound from (2.5) than to work directly with (2.3).
In order to utilize the Stein framework for our problem, we let W be a normalized version of S with mean 0 and unit variance. In particular, we let \(\xi _{i} = (X_{i}-p_{i})/\sigma \) and \(W=\sum_{i=1}^{n}\xi _{i}\) so that \(\mathbb{E}W=\mathbb{E} \xi _{i} = 0\), \(\text{Var} W = 1\), \(\text{Var} \xi _{i} = \sigma _{i}^{2}/\sigma ^{2}\) and W takes values in the set \(\mathcal{A}_{n} = \{(k-\mu )/\sigma : k\in \mathbb{Z} \cap [0,n] \}\). The set of test functions we consider is
If \(h=h_{x}\in \mathcal{H}\) with \(x=(k-\mu )/\sigma \), \(k\in \mathbb{Z} \cap [0,n]\), then we have \(\mathbb{E} h(W) = P(W=x) = P(S=k)\), and our problem is to bound
The next result quantifies \(\mathbb{E} h(Z)\) and verifies that \(\mathcal{H}\) defined in (2.6) is indeed an appropriate set of test functions for proving Theorem 1.1.
Lemma 2.1
Let \(x\in \mathcal{A}_{n}\) and \(h=h_{x}\in \mathcal{H}\). If \(\sigma ^{2} \geq 1\), then \(\mathbb{E} h(Z) = (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2} + R\), where
Proof
Part (a) is just a restatement of Lemma 4.1 (a) in [10]. We note that (a) implies (b) if \(|x| \leq 1\) since in this case
Thus to prove (b), we may assume that \(|x| > 1\).
By the mean value theorem for integrals, we have that \(\mathbb{E} h(Z) = \sigma ^{-1}\phi (c)\) for some \(c\in (x-1/\sigma , x)\), where \(\phi (c) = (\sqrt{2\pi})^{-1}e^{-c^{2}/2}\). Since \(|c-x| < 1/\sigma \), by the mean value theorem, \(|\phi (c)-\phi (x)| < \sigma ^{-1}|\phi '(d)|\) for some d between c and x, and thus \(d\in (x-1/\sigma , x)\). Now write \(\phi (c) =\phi (x) + R_{1}\) with \(|R_{1}| < \sigma ^{-1}|\phi '(d)|\), \(d\in (x-1/\sigma , x)\), and since \(\mathbb{E} h(Z) = \sigma ^{-1}\phi (c)\), we have
where \(R = R_{1}/\sigma \). Now \(\sigma ^{-1}|\phi '(d)| \leq 2.2(\sigma \sqrt{2\pi})^{-1}e^{-|d|}\), since \(|d|e^{-d^{2}/2} \leq 2.2e^{-|d|}\) for all d. As \(d\in (x-1/\sigma , x)\), we may write \(d = x + \delta \) with \(\delta \in (-1/\sigma ,0)\). Now we consider the cases \(x > 1\) and \(x < -1\).
If \(x>1\), then \(x-1/\sigma > 0\), and so \(d>0\) and \(|d|=d\). In this case then, \(\sigma ^{-1}|\phi '(d)| \leq 2.2(\sigma \sqrt{2\pi})^{-1}e^{-d} = 2.2(\sigma \sqrt{2\pi})^{-1}e^{-x-\delta} \leq 0.88e^{1/\sigma}\sigma ^{-1}e^{-|x|} \).
If \(x < -1\), then \(|d|=-d\), and so \(\sigma ^{-1}|\phi '(d)| \leq 2.2(\sigma \sqrt{2\pi})^{-1}e^{d} \leq 0.88 \sigma ^{-1}e^{\delta}e^{x} \leq 0.88\sigma ^{-1}e^{-|x|}\). As \(R = R_{1}/\sigma \) and \(|R_{1}| < \sigma ^{-1}|\phi '(d)|\), this completes the proof. □
As a consequence of Lemma 2.1, we have that for each \(x\in \mathcal{A}_{n}\)
and
where \(f := f_{x}\) is the bounded solution of
with \(h = h_{x}\) and \(Nh = \mathbb{E}h_{x}(Z)\).
Our problem then is reduced to bounding \(\mathbb{E}\{f'(W) - Wf(W)\}\) with f the bounded solution of (2.10). Our approach to bounding this quantity is discussed in Sect. 2.4. Before we can deal with this problem, we need to acquire some further auxiliary results in Sects. 2.2 and 2.3.
We end this section by making explicit the fact that Theorems 1.1 and 1.2 imply analogous results with the discretized normal distribution replacing the normal density.
Lemma 2.2
Let Y have a discretized normal distribution with parameters μ and \(\sigma ^{2}\) as defined by (1.2). Then
If \(\sigma ^{2} \geq 5\), then
Proof
The proof follows in essentially the same way as that of Lemma 2.1, and we omit the details. □
2.2 Concentration inequalities via the zero bias coupling
In this section we derive some concentration inequalities for \(P(a \leq W \leq b)\) and give bounds for the point probabilities \(P(W=x)\), \(x\in \mathcal{A}_{n}\). We recall that if Y is a zero mean random variable with \(\text{Var}(Y) = \sigma ^{2}_{Y}\), then the random variable \(Y^{*}\) is said to have the Y-zero biased distribution if
for all absolutely continuous functions f such that the above expectations exist. The notion of zero biasing was introduced in [4], where the existence of \(Y^{*}\) was established for any mean zero random variable Y. For further applications of the zero bias coupling beyond Berry–Esseen bounds in the classical central limit theorem, see [16] and [17]. Throughout the remainder of this section, a superscript asterisk * on a random variable denotes a random variable with the corresponding zero biased distribution.
In our setting, from Lemma 2.1 of [4], \(W^{*}\) may be constructed on the same space as W by setting \(W^{*}= W - \xi _{I} + \xi _{I}^{*}\), where I is a random index with distribution \(P(I=i)=\sigma _{i}^{2}/\sigma ^{2}\), \(1\leq i \leq n\). It may be shown [3, p. 29] that \(\xi _{i}^{*}\) is uniformly distributed on \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\), and thus, as \(\xi _{i}\) and \(\xi _{i}^{*}\) are both supported in \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\) and \(W-W^{*} = \xi _{I}-\xi _{I}^{*}\), we have that
[10] use (2.12) to prove a nonuniform bound on the local normal approximation of \(W^{*}\) that forms one of the key steps in the proof of their Theorem 1.1. Here, we will prove concentration inequalities for \(W^{*}\) and use these together with (2.12) to obtain concentration inequalities for W.
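For a single centered Bernoulli summand, the zero bias identity (2.11) can be verified in exact rational arithmetic; the sketch below does so with the test function \(f(y)=y^{2}\), for which both sides work out to \(p(1-p)(1-2p)\) (the parameter values are arbitrary):

```python
from fractions import Fraction

def check_zero_bias_identity(p):
    """For Y = X - p with X ~ Bernoulli(p), Y* is uniform on [-p, 1-p].
    Verify sigma_Y^2 * E f'(Y*) = E[Y f(Y)] exactly for f(y) = y^2."""
    p = Fraction(p)
    var = p * (1 - p)
    # Left side: f'(y) = 2y, and E Y* is the midpoint of [-p, 1-p].
    lhs = var * 2 * (((1 - p) + (-p)) / 2)
    # Right side: E[Y^3] over the two-point distribution of Y.
    rhs = p * (1 - p) ** 3 + (1 - p) * (-p) ** 3
    return lhs, rhs

lhs, rhs = check_zero_bias_identity(Fraction(3, 10))
print(lhs == rhs, lhs)
```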
Choosing the function f in (2.11) such that \(f'(w) = \mathbf{1}_{[a,b]}(w)\) and \(f(\frac{a+b}{2}) = 0\), for \(a \leq b\), it is shown in Lemma 3.2 of [17] that for any random variable Y with \(\mathbb{E}(Y) = 0\) and \(\text{Var}(Y) = \sigma ^{2}_{Y}\),
We now use this to obtain uniform concentration inequalities for W and \(W^{(i)} = W - \xi _{i}\).
Lemma 2.3
For all \(a \leq b\), we have
Moreover, if \(\sigma ^{2} \geq 5\), we have
where \(W^{(i)} = W - \xi _{i}\).
Proof
From (2.12), (2.13) and the fact that \(\sigma ^{2}_{W} = \text{Var}(W) =1\), we have
which is (2.14). For (2.15) we have that
and so using this in (2.13) with \(|W^{(i)} - W^{(i)*}| \leq 1/\sigma \) gives
as required. □
Lemma 2.3 may be used to uniformly bound \(P(W=x)\), e.g., by writing \(P(W=x)=P(x-\epsilon /2 \leq W \leq x+\epsilon /2)\) for small positive ϵ and letting \(\epsilon \to 0^{+}\). However, this approach gives a worse constant than that of [18], which we state in Lemma 2.4 below together with an analogous result for \(P(W^{(i)}=x)\). As before, \(\mathcal{A}_{n}\) denotes the support of W, and we will denote the support of \(W^{(i)}\) by \(\mathcal{A}_{n}^{(i)}\) so that \(\mathcal{A}_{n}^{(i)} = \{(k-\mu ^{(i)})/\sigma : k \in [0, n-1] \cap \mathbb{Z}\}\) where \(\mu ^{(i)}= \mu - p_{i}\).
Lemma 2.4
The following uniform bound holds:
Moreover if \(\sigma ^{2} \geq 1\) then
where \(S^{(i)} = S - X_{i}\).
Proof
The bound (2.16) is given in Lemma 1 from [18]. Now, as \(S^{(i)}\) is also a Poisson binomial random variable, we have from (2.16) and the fact that \(\sigma \geq 1\) that
for each \(k\in [0,n1]\cap \mathbb{Z}\), which implies (2.17). □
Lemma 2.5 and Corollary 2.1 below are nonuniform versions of Lemmas 2.3 and 2.4. Before stating these results, we recall from Lemma 3.3 of [10] that for each \(m\in \mathbb{N}\) we have the bound \(\mathbb{E}W^{2m} \leq p(2m)\), uniformly in n, where \(p(2m)\) is the number of partitions of 2m, i.e., the number of ways that 2m may be written as a sum of positive integers irrespective of order. Since \(p(2m)^{1/2m}\to 1\) as \(m\to \infty \) [19, Sect. 6.4], it follows that, uniformly in n,
The same bound holds when W is replaced by \(W^{(i)} = W - \xi _{i}\).
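The partition function \(p(n)\) is easily computed by dynamic programming; the sketch below illustrates the (slow) convergence \(p(2m)^{1/2m}\to 1\) invoked above:

```python
def partition_counts(n_max):
    """p(n) = number of integer partitions of n, via the classic DP:
    add one allowed part size at a time and accumulate counts."""
    p = [0] * (n_max + 1)
    p[0] = 1
    for part in range(1, n_max + 1):
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

p = partition_counts(200)
print(p[2], p[4], p[6], p[8])  # 2 5 11 22
for m in (5, 25, 100):
    print(2 * m, round(p[2 * m] ** (1 / (2 * m)), 3))
```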
We also make use of the fact that if \(\sigma ^{2} \geq A^{2}\) then \(|\xi _{i}| \leq 1/A\) for each \(1\leq i \leq n\), and so by Lemma 8.1 in [3] with \(\alpha = 1/A\) and \(B^{2}=1\), we have that for each \(t > 0\)
In particular, letting \(t = 2m/(2m1)\) for \(m\in \mathbb{N}\), we find that, uniformly in n,
with the same bound holding when W is replaced by \(W^{(i)}\).
Lemma 2.5
If \(\sigma ^{2} \geq 5\) then for \(0 \leq a < b\), we have
and
Proof
Define the function \(g:\mathbb{R}\to \mathbb{R}\) by
for which we have \(g'(w) \geq 0\) for \(w\in \mathbb{R}\) and \(g'(w) \geq e^{a}\) for \(w \in [a, b]\). We also have \(0 \leq g(w) \leq (b-a)e^{w}\) for \(w\in \mathbb{R}\). It follows that
and
By Hölder’s inequality, for all \(m\in \mathbb{N}\),
Letting \(m\to \infty \) and applying (2.19) and (2.21) with \(A=\sqrt{5}\), we get
and hence
Now, as \(\mathbb{E}g'(W^{*}) = \mathbb{E}Wg(W)\), we get from (2.24) and (2.25)
Since \(|W-W^{*}| \leq 1/\sigma \), we have, assuming \(a \geq 1/\sigma \), that
The assumption \(a \geq 1/\sigma \) was required in (2.27) to ensure that (2.26) could be applied. If \(0 \leq a < 1/\sigma \), then \(a\in [0,1/\sqrt{5})\), and the result follows from Lemma 2.3 since
The proof of (2.23) follows in the same way except that, as \(\text{Var}(W^{(i)}) \neq 1\), prior to (2.26) we must use that \(\sigma _{W^{(i)}}^{2}\mathbb{E}g'(W^{(i)*}) = \mathbb{E}W^{(i)}g(W^{(i)})\) and the fact that \(\sigma _{W^{(i)}}^{2} \geq (\sqrt{19/20})^{2} = 0.95\), as shown in the proof of Lemma 2.3. □
Remark 1
It is clear from the proof of Lemma 2.5 that (2.22) holds more generally whenever \(W=\sum_{i=1}^{n}\xi _{i}\), \(\text{Var}(W)=1\) with \(\mathbb{E} \xi _{i}=0\) and \(\xi _{i} \leq 1/\sqrt{5}\). Thus, if a and b are both negative, we have
Arguing as in the paragraph prior to Lemma 2.4, we obtain from Lemma 2.5 the following nonuniform bound on the point probabilities \(P(W=x)\), \(x\in \mathcal{A}_{n}\) and \(P(W^{(i)}=x)\), \(x\in \mathcal{A}_{n}^{(i)}\).
Corollary 2.1
For \(x\in \mathcal{A}_{n}\) with \(x=(k-\mu )/\sigma \), \(k\in [0,n]\cap \mathbb{Z}\), we have
Similarly, for \(x\in \mathcal{A}_{n}^{(i)}\), we have
2.3 The Stein equation
In this section we consider the properties of the function f, which is the bounded solution to the Stein equation (2.10). For the remainder of the paper, unless otherwise stated, it may be assumed that \(x\in \mathcal{A}_{n}\) where \(\mathcal{A}_{n}\) is as defined in Sect. 2.1. We first recall the following basic properties of f from Lemma 3.2 in [10].
Lemma 2.6
Let \(f:=f_{x}\) be the bounded solution of (2.10). Then

(a)
\(0\leq f'(w) \leq 1\), \(w\in (x-1/\sigma , x]\),

(b)
f is continuous, increasing on the interval \((x-1/\sigma , x]\), and decreasing otherwise,

(c)
if \(\sigma ^{2} \geq 1\), we have
$$ \bigl\vert f(w) \bigr\vert \leq \frac{1}{\sigma}, \quad w\in \mathbb{R}. $$(2.31)
It was also shown in [10] that the term Nh appearing in (2.10) is bounded above by \(C\sigma ^{-1}e^{-|x|}\) for some absolute positive constant C. We now quantify the value of C.
Lemma 2.7
Let \(Nh = P(x-1/\sigma < Z \leq x)\), where \(Z\sim N(0,1)\). Then,

(a)
\(Nh \leq \frac{1.03e^{-|x|}}{\sigma}\) if \(\sigma ^{2} \geq 5\),

(b)
\(Nh \leq \frac{0.4e^{-x^{2}/2}}{\sigma}\) for all \(\sigma > 0\) when \(x \leq 0\).
Proof
For (a), we divide the proof into three cases according to whether \(x > 1/\sqrt{5}\), \(|x| \leq 1/\sqrt{5}\), or \(x < -1/\sqrt{5}\).
Case 1: \(x > 1/\sqrt{5}\). In this case, since \(\sigma \geq \sqrt{5}\), we have \(x-1/\sigma > 0\) and \(Nh \leq (\sigma \sqrt{2\pi})^{-1} e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2} }\). Since \(e^{-t^{2}/2} \leq e^{1/2}e^{-t}\), when \(t>0\), we have \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{1/2}e^{-(x-1/\sigma )} = e^{ \frac{1}{2}+\frac{1}{\sigma}} (\sqrt{2\pi})^{-1}\frac{e^{-x}}{\sigma}\). Since \((\sqrt{2\pi})^{-1}e^{1/2 + 1/\sqrt{5}} < 1.03\), (a) holds when \(x > 1/\sqrt{5}\).
Case 2: \(|x| \leq 1/\sqrt{5}\). Since \(Nh \leq (\sigma \sqrt{2\pi})^{-1}\), we have for all \(|x|\leq 1/\sqrt{5}\) that
which holds for all \(\sigma > 0\).
Case 3: \(x < -1/\sqrt{5}\). In this case we have
valid for all \(\sigma > 0\), where we used the fact that \(e^{-x^{2}/2} \leq e^{1/2}e^{x}\). This completes the proof of (a).
For (b), noting that the working in Case 3 above holds whenever \(x \leq 0\), we have \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2} \leq 0.4\sigma ^{-1}e^{-x^{2}/2}\). □
We recall, from equation (3.6) of [10], that the unique bounded solution f, of (2.10), may be written as
and so
and
We will not need to make use of the explicit expression for \(f'(w)\) for \(w \in (x - \frac{1}{\sigma}, x]\); it will suffice to know that \(0 \leq f'(w) \leq 1\) in this case. For \(w\notin (x - \frac{1}{\sigma}, x]\), we know from Lemma 2.6 (b) that \(f'(w) < 0\), and together with Lemma 2.4 in [3] we have \(-2 \leq f'(w) \leq 0\) in this case. Our next result, Lemma 2.8, gives more detailed bounds on \(f'(w)\). We first recall the standard Gaussian tail bounds [3, pp. 37–38]
and
Lemma 2.8
Let f be the bounded solution of (2.10).
(a) If \(x \geq 0\) then
(b) If \(x > 1/\sigma \) then
(c) If \(0 \leq x \leq 1/\sigma \) then
(d) If \(x < 0\) then
Proof
(a) is immediate from (2.33) together with the tail bounds (2.35).
For (b), when \(w \leq 0\), we have from (2.34) that \(f'(w) = \sqrt{2\pi}\,Nh\,|w|e^{w^{2}/2}[1-\Phi (|w|)] - Nh\), and (i) again follows from (2.35).
For (ii), as \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) in this case and \(e^{w^{2}/2} \leq e^{\frac{9}{32}(x-\frac{1}{\sigma})^{2}}\), we have
as \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) for \(t \geq 0\). The result then follows from (2.34).
For (iii), use (2.34) together with \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) and \(e^{w^{2}/2} \leq e^{\frac{1}{2}(x-\frac{1}{\sigma})^{2}}\) for \(w \in (0, x-1/\sigma )\).
For (c), if \(w \leq x-1/\sigma \) then \(w \leq 0\), and we may write, from (2.34), \(f'(w) = \sqrt{2\pi}\,Nh\,|w| e^{w^{2}/2}[1-\Phi (|w|)] - Nh\), and we again get the result from (2.35).
(d)(i) follows in essentially the same way as (b)(ii) but now using (2.33) with \(Nh \leq (\sigma \sqrt{2\pi})^{-1}e^{-x^{2}/2}\) and \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\).
(d)(ii) follows in essentially the same way as (b)(iii) but using (2.33).
For (d)(iii), if \(w \leq x-1/\sigma \) then \(w < 0\), and we get the result in this case as in (c), while for \(w > 0\), we get the result as in (a). □
We now use Lemma 2.8 to give \(O(1/\sigma )\) bounds on \(\mathbb{E}f'(\bar{W})\) when W̄ is a random variable that is sufficiently close to W.
Lemma 2.9
If \(\sigma ^{2} \geq 1\) and W̄ is a random variable strictly between \(W^{(i)} - p_{i}/\sigma \) and \(W^{(i)} + (1-p_{i})/\sigma \), then
Furthermore, if \(\sigma ^{2} \geq 5\) and \(\vert x \vert \geq 1/\sigma \), then
Proof
The proof is given in Sect. 4.1. □
We now give our final two auxiliary results required to prove Theorems 1.1 and 1.2.
Lemma 2.10
If \(\sigma ^{2} \geq 1\) and W̄ is a random variable strictly between W and \(W^{(i)}+t\) with \(t\in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\), then
If we also have \(\sigma ^{2} \geq 5\) and \(\vert x \vert > 1.5\), then
Proof
The proof is given in Sect. 4.2. □
Lemma 2.11
If \(\sigma ^{2} \geq 5\) and \(t \in (-p_{i}/\sigma , (1-p_{i})/\sigma ]\), then
Proof
The proof is given in Sect. 4.3. □
2.4 The Kfunction
As discussed in Sect. 2.1, our problem reduces to bounding \(\mathbb{E}\{f'(W)-Wf(W)\}\), where f is the bounded solution of the Stein equation (2.10). To this end, define the functions \(K_{i}\), \(1\leq i \leq n\), by
By Fubini’s theorem we find that
and so
since
Writing \(W^{(i)} = W-\xi _{i}\), it is shown in Sect. 2.3.1 of [3] that
As f is the solution of (2.10), we may decompose the righthand side of (2.45) as
Since \(\xi _{i} \in \{-p_{i}/\sigma , (1-p_{i})/\sigma \}\), we see that \(K_{i}(t)\neq 0\) requires that \(t \in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\). Thus in bounding (2.46)–(2.49), for each i, \(1 \leq i \leq n\), we may restrict our attention to \(t \in [-p_{i}/\sigma , (1-p_{i})/\sigma ]\) as otherwise the integrands are zero. In particular this condition implies that \(|t| < 1/\sigma \) and \(|\xi _{i} - t| \leq 1/\sigma \).
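For a single summand, the identity \(\int K_{i}(t)\,dt = \mathbb{E}\xi _{i}^{2}\), which underlies (2.44), is easy to check numerically; in the sketch below (with arbitrary parameter values) \(K_{i}\) works out to be constant, equal to \(p_{i}(1-p_{i})/\sigma \), on \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\):

```python
import math

def K(t, p, sigma):
    """K_i(t) = E[xi_i (1{0 <= t <= xi_i} - 1{xi_i <= t < 0})] for the centered,
    scaled Bernoulli xi_i taking (1-p)/sigma w.p. p and -p/sigma w.p. 1-p."""
    total = 0.0
    for xi, prob in (((1 - p) / sigma, p), (-p / sigma, 1 - p)):
        if 0 <= t <= xi:
            total += xi * prob
        elif xi <= t < 0:
            total -= xi * prob
    return total

p, sigma = 0.3, math.sqrt(5.0)
# Midpoint Riemann sum of K_i over [-p/sigma, (1-p)/sigma]; it should
# equal Var(xi_i) = p(1-p)/sigma^2.
lo, hi, steps = -p / sigma, (1 - p) / sigma, 100000
h = (hi - lo) / steps
integral = sum(K(lo + (j + 0.5) * h, p, sigma) for j in range(steps)) * h
var_xi = p * (1 - p) / sigma**2
print(round(integral, 6), round(var_xi, 6))
```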
3 Proofs of main results
We now give our proofs of Theorems 1.1 and 1.2, starting with Theorem 1.2, which is then used to simplify the proof of Theorem 1.1. We use the Kfunction approach and notation from Sect. 2.4. Our problem is to bound the four terms (2.46)–(2.49), and we will consider each term in turn.
3.1 Proof of Theorem 1.2
Bounding (2.46): For (2.46), there is a random W̄ between \(W^{(i)} + \xi _{i}\) and \(W^{(i)} + t\) such that \(f(W^{(i)} + \xi _{i}) - f(W^{(i)} + t) = f'(\bar{W})(\xi _{i} - t)\). Since for each i we only need to consider t such that \(|\xi _{i} - t| \leq 1/\sigma \), we may bound (2.46) as
by (2.40).
Bounding (2.47): As \(\xi _{i} = (1-p_{i})/\sigma \) with probability \(p_{i}\) and \(\xi _{i} = -p_{i}/\sigma \) with probability \(1-p_{i}\), we have that
for some random variable W̄ strictly between \(W^{(i)} + (1-p_{i})/\sigma \) and \(W^{(i)} - p_{i}/\sigma \). Now, by Lemma 2.9 (a) and the fact that \(p(1-p) \in (0,1/4]\) when \(p\in (0,1)\), (2.47) may be bounded as
Bounding (2.48): Using the fact that \(|f(W^{(i)}+t)| \leq 1/\sigma \) with (2.44) gives
Bounding (2.49): We first find an expression for the functions \(K_{i}\), \(1\leq i \leq n\). Since \(P(\xi _{i} = (1-p_{i})/\sigma )=p_{i}\) and \(P(\xi _{i} = -p_{i}/\sigma )=1-p_{i}\), we have
and hence
Now we consider the value of \(P(x-1/\sigma -t < W^{(i)} \leq x - t)\) as t varies over the interval \([-p_{i}/\sigma , (1-p_{i})/\sigma ]\). Since \(W^{(i)}\) takes values in the set \(A^{(i)}_{n} = \{(k-\mu ^{(i)})/\sigma : k\in \mathbb{Z}\cap [0, n-1] \}\), where \(\mu ^{(i)} = \mu - p_{i}\), it follows that the interval \((x-1/\sigma -t, x-t]\) contains exactly one element of \(A^{(i)}_{n}\) as it is of length \(1/\sigma \). Suppose \(x=(k-\mu )/\sigma \), \(k\in \mathbb{Z}\cap [0,n]\), and let \(x^{(i)} = (k - \mu ^{(i)})/\sigma \in A^{(i)}_{n}\). Then we have \(x=x^{(i)}-p_{i}/\sigma \) with \(p_{i}<1\), and so
Thus we have
where \(S^{(i)} = S - X_{i}\). Thus
where I is a random index with distribution \(P(I=i) = \sigma _{i}^{2}/\sigma ^{2}\), \(1\leq i \leq n\), independent of the \(X_{i}\).
Also,
Thus we may bound (2.49) as
with the inequality following from the proof of Theorem 1.1 of [10]. From (2.16) we get that
Adding together our bounds for (2.46)–(2.49) in (2.8) together with the remainder R from Lemma 2.1 (a), we find that
as required.
3.2 Proof of Theorem 1.1
The uniform bound in Theorem 1.2 implies the nonuniform bound of Theorem 1.1 when \(|x| \leq 1.5\) as \(C_{1} > 3.15+7.39+4.5=15.04\), \(C_{2} > 12.03\), and \(C_{3} > 1.54\), while \(3.23e^{1.5} < 14.5\), \(1.35e^{1.5} < 6.1\), and \(0.25e^{1.5} < 1.2\). Thus we may assume that \(|x| > 1.5\) so that part (b) of Lemmas 2.9 and 2.10 apply.
Bounding (2.46): As in the uniform case, we have from (2.41)
Bounding (2.47): As in the uniform case, we have, with W̄ a random variable strictly between \(W^{(i)} - p_{i}/\sigma \) and \(W^{(i)} + (1-p_{i})/\sigma \), that
by (2.39).
Bounding (2.48): As in the uniform case, but now applying (2.11) with (2.44), we obtain
Bounding (2.49): As in the uniform case, but now applying (2.29), we have
Adding together our bounds for (2.46)–(2.49) in (2.9) together with the remainder R from Lemma 2.1 (b) and using that \(e^{2/\sigma} \leq 0.87e^{7/3\sigma}\) for \(\sigma ^{2} \geq 5\), we find
where
completing the proof.
4 Proofs of auxiliary results
4.1 Proof of Lemma 2.9
Proof
Throughout the proof we set \(A_{1} = (-\infty , x-1/\sigma ]\), \(A_{2} = (x-1/\sigma , x]\), and \(A_{3} = (x, \infty )\). We will bound \(\mathbb{E}f'(\bar{W})\) in two steps, first considering the case where \(\bar{W} \in A_{2}\) and then \(\bar{W} \notin A_{2}\). We also use the facts from Lemma 2.6 that \(f'(w) \leq 0\) when \(w \notin A_{2}\) and \(f'(w) \geq 0\) when \(w \in A_{2}\).
(a) Case 1: \(\bar{W} \in A_{2}\).
When \(\bar{W} \in A_{2}\) we have \(0 \leq f'(\bar{W}) \leq 1\) and \(W^{(i)} = x - (1-p_{i})/\sigma \). To see this latter fact, recall that \(W^{(i)}\) takes values in the set \(\mathcal{A}_{n}^{(i)} = \mathcal{A}_{n} + p_{i}/\sigma \), i.e., the support of \(W^{(i)}\) equals that of W translated by \(p_{i}/\sigma \). Thus, for example, we cannot have \(W^{(i)} = x + p_{i}/\sigma \) since then W̄ would lie in the interval \((x, x+1/\sigma )\) contradicting \(\bar{W} \in A_{2}\). From (2.17) we have that
and we note that this holds for any \(x \in \mathcal{A}_{n}\).
Case 2: \(\bar{W} \in A_{1}\cup A_{3}\).
Subcase 2.1: \(x \geq 1/\sigma \).
From Lemma 2.8 (a) and (b) and the fact that \(f'(w) < 0\) for \(w\in A_{1}\cup A_{3}\), we have that \(f'(w) \in (-|w|/\sigma - Nh, 0]\) when \(w\in A_{1}\cup A_{3}\). Applying the Cauchy–Schwarz inequality gives \(\mathbb{E}|W| \leq (\mathbb{E}W^{2})^{1/2} =1\), and so \(\mathbb{E}|\bar{W}| \leq \mathbb{E}|W| + 1/\sigma \leq 1 + 1/\sigma \). Also, as \(Nh \leq (\sigma \sqrt{2\pi})^{-1} < 0.4/\sigma \), we have
Subcase 2.2: \(0 \leq x < 1/\sigma \).
In this case Lemma 2.8 (a) and (c) provide a tighter bound on \(f'(w)\), \(w\in A_{1}\cup A_{3}\), than in Subcase 2.1 so that (4.2) still holds.
Subcase 2.3: \(x < 0\).
In this case applying Lemma 2.8 (d) shows that \(f'(w) \in (-|w|/\sigma - Nh, 0]\) for \(w\in A_{1}\cup A_{3}\), and the result follows in the same way as when \(x \geq 0\).
Thus from each subcase we see that (4.1) and (4.2) hold for all \(x\in \mathcal{A}_{n}\), which gives the result.
(b) First assume that \(x \geq 1/\sigma \). In slight contrast to the proof of part (a) we now consider the contributions to \(\mathbb{E}f'(\bar{W})\) when W̄ is in the sets \((x-1/\sigma , x]\), \((-\infty , \frac{3}{4}(x-1/\sigma ))\cup (x,\infty )\), and \((\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\). The sets \(A_{1}\), \(A_{2}\), and \(A_{3}\) are as in part (a).
Case 1: \(\bar{W} \in (x-1/\sigma , x] = A_{2}\).
In this case we have from Lemma 2.6 that \(0 \leq f'(\bar{W}) \leq 1\), and as in the proof of part (a), \(W^{(i)} = x-(1-p_{i})/\sigma \). Thus, for all \(x \in \mathcal{A}_{n}\),
by (2.30).
Case 2: \(\bar{W} \in (-\infty , \frac{3}{4}(x-1/\sigma ))\cup (x,\infty )\).
By Lemma 2.8 parts (a), (b)(i), and (b)(ii), together with the fact that \(f'(w) \leq 0\) when \(w \notin A_{2}\), we have that \(f'(w) \in [-e^{\frac{8}{7} + \frac{1}{\sigma}}|w|e^{-x}\sigma ^{-1}-Nh, 0]\) for \(w \in (-\infty , \frac{3}{4}(x-1/\sigma )]\cup (x,\infty )\). Using this together with the fact that \(\mathbb{E}|\bar{W}| \leq \mathbb{E}|W|+ 1/\sigma \leq 1+1/\sigma \) and Lemma 2.7 (a), we get
where \(A = (-\infty , \frac{3}{4}(x-1/\sigma )]\cup (x,\infty )\).
Case 3: \(\bar{W} \in (\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\).
In this case from Lemma 2.8 (b)(iii),
Now, by Hölder's inequality, we have for each \(p\in \mathbb{N}\) that
Letting \(p\to \infty \) and applying (2.19) and (2.20), we find
and so, again using that \(f'(w) \leq 0\) when \(w \notin A_{2}\), we have
Combining (4.4) and (4.5) we get
Since \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 2.06> 1.9e^{1/ \sigma}(1+e^{1/\sigma})\) when \(\sigma \geq \sqrt{5}\), from (4.6) and (4.3) we see that
The result follows in a similar way when \(x \leq 1/\sigma \). □
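The Hölder-plus-limit step used in Case 3 above follows a standard pattern; schematically, for a bounded random variable X and an event A (in the proof, the specific moment and tail bounds come from (2.19) and (2.20)),

```latex
\mathbb{E}\bigl[|X|\,\mathbf{1}_{A}\bigr]
  \;\le\; \bigl(\mathbb{E}|X|^{p}\bigr)^{1/p}\,P(A)^{(p-1)/p}
  \;\xrightarrow[\;p\to\infty\;]{}\; \|X\|_{\infty}\,P(A),
```

since \((\mathbb{E}|X|^{p})^{1/p} \to \|X\|_{\infty}\) and \(P(A)^{(p-1)/p} \to P(A)\) as \(p \to \infty\).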
4.2 Proof of Lemma 2.10
Proof
As in the proof of Lemma 2.9, we let \(A_{1} = (-\infty , x-1/\sigma ]\), \(A_{2} = (x-1/\sigma , x]\), and \(A_{3} = (x, \infty )\).
(a) We first consider the case where \(x \geq 1/\sigma \). For each \(p\in \mathbb{N}\), we have, by Hölder’s inequality, that
We will bound the second factor appearing on the right of (4.7) separately for \(\bar{W} \notin A_{2}\) and \(\bar{W} \in A_{2}\), starting with the case \(\bar{W} \notin A_{2}\).
By Lemma 2.8 parts (a) and (b)(iii), together with the fact that \(Nh \leq (\sigma \sqrt{2\pi})^{-1} \leq 0.4/\sigma \), we have that \(|f'(\bar{W})| \leq |\bar{W}|/\sigma + 0.4/\sigma \) when \(\bar{W} \in A_{1} \cup A_{3}\). Thus, using that \((a+b)^{q} \leq 2^{q-1}(a^{q} + b^{q})\) whenever \(a, b > 0\) and \(q \geq 1\), together with \(|\bar{W}| \leq |W| + 1/\sigma \), we have
where we used that \(\mathbb{E}|W|^{2p/(2p-1)} \leq (\mathbb{E}W^{2})^{\frac{2p}{2(2p-1)}} = 1\).
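The two elementary inequalities used in this display can be verified directly. By convexity of \(t \mapsto t^{q}\),

```latex
\Bigl(\frac{a+b}{2}\Bigr)^{q} \le \frac{a^{q}+b^{q}}{2}
  \quad\Longrightarrow\quad
  (a+b)^{q} \le 2^{q-1}\bigl(a^{q}+b^{q}\bigr),
  \qquad a, b > 0,\ q \ge 1,
```

and, with \(r = 2p/(2p-1) \in (1,2]\), Jensen's inequality applied to the concave map \(t \mapsto t^{r/2}\) gives \(\mathbb{E}|W|^{r} = \mathbb{E}(W^{2})^{r/2} \le (\mathbb{E}W^{2})^{r/2} = 1\).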
Now, when \(\bar{W} \in A_{2}\) we have \(f'(\bar{W}) \leq 1\) and \(W^{(i)} = x - (1-p_{i})/\sigma \). The latter fact follows in a similar way to the case \(\bar{W} \in A_{2}\) in the proof of Lemma 2.9. For example, if \(\bar{W} \in A_{2}\), then we cannot have \(W^{(i)} = x + p_{i}/\sigma \), as then \(W^{(i)} + t \in [x, x+1/\sigma ]\) and \(W \in \{x, x+1/\sigma \}\), and it is impossible for W̄ to lie strictly between W and \(W^{(i)} + t \) while at the same time \(\bar{W} \in A_{2}\). Similarly, we see that it is impossible to have \(W^{(i)} = x - 2/\sigma + p_{i}/\sigma \). Thus, we have from (2.15) that for all \(x \in \mathcal{A}_{n}\)
From (4.8) and (4.9) we have that
and so
and using this in (4.7) and letting \(p\to \infty \) gives (2.40).
As in the proof of Lemma 2.9 (a), we note that when \(0 \leq x < 1/\sigma \), Lemma 2.8 (a) and (c) provide a tighter bound on \(f'(w)\), \(w\in A_{1}\cup A_{3}\). This, together with the fact that (4.9) holds for all \(x \in \mathcal{A}_{n}\), gives the result when \(0 \leq x < 1/\sigma \). The case \(x < 0\) is dealt with in a similar way using part (d) of Lemma 2.8.
(b) Our strategy is slightly different from that for part (a). We will consider the contributions to \(\mathbb{E}W^{(i)}f'(\bar{W})\) when W̄ is in \(A_{1}\), \(A_{2}\), and \(A_{3}\). As \(f'(\bar{W})\) is positive when \(\bar{W}\in A_{2}\) and negative otherwise, together with the fact that \(|\bar{W}-W^{(i)}| \leq 1/\sigma \), we will be able to keep track of the signs of the various contributions and obtain some partial cancellation that would not be possible with a simple use of Hölder's inequality as in part (a).
We first assume that \(x > 1.5\).
Case 1: \(\bar{W} \in A_{3}\).
In this case \(\bar{W}> x > 1.5\), and as \(|\bar{W} - W^{(i)}| < 1/\sigma \leq 1/\sqrt{5}\), we also have \(W^{(i)} > 0\) and hence \(W^{(i)}f'(\bar{W}) < 0\). Thus by Lemma 2.8 (a)
and so
Case 2: \(\bar{W} \in A_{1}\).
We write \(A_{1} = (-\infty ,0) \cup [0, \frac{3}{4} (x-1/\sigma ) ] \cup (\frac{3}{4} (x-1/\sigma ), x-1/\sigma ]\) and consider the contribution to \(\mathbb{E}W^{(i)}f'(\bar{W})\) from each set. For \(\bar{W} \in (-\infty ,0)\) we have that \(W^{(i)} \in (-\infty , 1/\sigma )\) and
and the first term on the right of (4.11) is positive and the second negative. Now, from Lemma 2.8 (b)(i) we have that
and so using the fact that \(|\bar{W}-W^{(i)}| \leq 1/\sigma \) gives
Using the bound for Nh from Lemma 2.7 (a), we get
For the second term in (4.11) we have
Thus, as the second term in (4.11) is negative, we have
and so together with (4.15) this implies
Now suppose that \(\bar{W}\in [0, \frac{3}{4} (x - 1/\sigma ) ]\) and write
with the first term negative and the second positive. Now applying Lemma 2.8 (b)(ii) we have
where we used that \(\mathbb{E}|W^{(i)}\bar{W}| \leq \mathbb{E}|W^{(i)}|^{2} + \sigma ^{-1} \mathbb{E}|W^{(i)}|\). Hence,
Also,
which together with (4.17) implies that
Now consider \(\bar{W} \in (\frac{3}{4}(x-1/\sigma ), x-1/\sigma ]\). As \(x > 1.5\) and \(\sigma \geq \sqrt{5}\), we have \(\frac{3}{4}(x-1/\sigma ) > 0.78\), and thus \(W^{(i)} > 0\) and \(W^{(i)} f'(\bar{W}) < 0\) in this case.
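As a quick numerical sanity check of the arithmetic here (not part of the proof): the quantity \(\frac{3}{4}(x-1/\sigma)\) is increasing in both x and σ, so evaluating it at the boundary values \(x = 1.5\) and \(\sigma = \sqrt{5}\) gives a lower bound over the whole case.

```python
import math

sigma = math.sqrt(5)  # boundary of the standing assumption sigma >= sqrt(5)
x = 1.5               # boundary of the case x > 1.5

# 3/4 * (x - 1/sigma) is increasing in both x and sigma, so its value
# at the boundary is a lower bound over the whole case
value = 0.75 * (x - 1.0 / sigma)
print(value)  # ~0.7896
assert value > 0.78
```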
We have by Lemma 2.8 (b)(iii) that for each \(p\in \mathbb{N}\),
Now, using (2.20) we find
and using this together with (2.19) in (4.19) and letting \(p\to \infty \) gives
Hence,
and combining this with our bounds (4.16) and (4.18), we find that
where we used that \(\sigma \geq \sqrt{5}\) in the last line. Together with our bound (4.10) we have
Case 3: \(\bar{W} \in A_{2}\).
In this case, \(W^{(i)} \geq x-2/\sigma \geq 0\), and so, as \(f'(\bar{W})\in [0,1]\), we have \(W^{(i)}f'(\bar{W}) \geq 0\). As in part (a), \(W^{(i)} = x-1/\sigma +p_{i}/\sigma \), and so, again using Hölder's inequality,
Letting \(p\to \infty \), we get from (2.19) and (2.29) that
Since \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} + 1.35 > 1.9e^{1/ \sigma}(1+e^{1/\sigma}) + 0.515\) and \(3e^{\frac{7}{3\sigma}} + e^{\frac{8}{7} + \frac{1}{\sigma}} > 1.45e^{ \frac{8}{7} + \frac{1}{\sigma}}\) when \(\sigma \geq \sqrt{5}\), we have from (4.21) and (4.23) that
The case \(x < 1.5\) follows in a similar way. □
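The constant comparisons in the final step (and the analogous comparison in (4.6)) can be spot-checked numerically. The check below is not a substitute for the proof's argument, which covers all \(\sigma \geq \sqrt{5}\); it simply evaluates both sides on a grid of σ values from the boundary \(\sqrt{5}\) upward.

```python
import math

# left- and right-hand sides of the two comparisons used above,
# as functions of sigma (written s here)
def lhs1(s): return 3 * math.exp(7 / (3 * s)) + math.exp(8 / 7 + 1 / s) + 1.35
def rhs1(s): return 1.9 * math.exp(1 / s) * (1 + math.exp(1 / s)) + 0.515

def lhs2(s): return 3 * math.exp(7 / (3 * s)) + math.exp(8 / 7 + 1 / s)
def rhs2(s): return 1.45 * math.exp(8 / 7 + 1 / s)

# both inequalities should hold for every sigma >= sqrt(5)
for s in [math.sqrt(5), 3.0, 10.0, 100.0, 1e6]:
    assert lhs1(s) > rhs1(s)
    assert lhs2(s) > rhs2(s)
print("both inequalities hold on the grid")
```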
4.3 Proof of Lemma 2.11
We assume that \(x \geq 0\) and the sets \(A_{1}\), \(A_{2}\), and \(A_{3}\) are as in the proofs of the previous lemmas. From (2.32), we see that f is negative on \(A_{1}\) and positive on \(A_{3}\).
From (2.32), the tail bound (2.35), and Lemma 2.7, we have that
if \(x \geq 1\). Since \(e^{w^{2}/2}[1\Phi (w)]\) is a decreasing function of w, we have that when \(x \in [0,1]\)
and from (4.24), we see this holds for all \(x \geq 0\).
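The monotonicity used here is the classical Mills-ratio bound. Writing φ and Φ for the standard normal density and distribution function, and using \(e^{w^{2}/2}\varphi(w) = (2\pi)^{-1/2}\),

```latex
\frac{d}{dw}\Bigl[e^{w^{2}/2}\bigl(1-\Phi(w)\bigr)\Bigr]
  = e^{w^{2}/2}\,w\bigl(1-\Phi(w)\bigr) - e^{w^{2}/2}\varphi(w)
  = e^{w^{2}/2}\,w\bigl(1-\Phi(w)\bigr) - \frac{1}{\sqrt{2\pi}} \;<\; 0,
```

which is immediate for \(w \leq 0\) and follows for \(w > 0\) from the standard Gaussian tail bound \(1-\Phi(w) < \varphi(w)/w\).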
Now we consider the case where \(W^{(i)}+t \in A_{1}\). Again, from (2.32) we have that
using that \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) for \(t \geq 0\), and from the tail bounds (2.36) we see that this also holds for \(W^{(i)} + t \leq 0\). Thus,
Now, by Markov's inequality, (2.20), and the fact that \(|f(w)|\leq 1/\sigma \), we have
and thus
Finally, when \(W^{(i)}+t \in A_{2}\), we have that \(W^{(i)} = x - 1/\sigma +p_{i}/\sigma \), and so
From (4.25), (4.26), (4.27), and (4.28) we see that the claimed bound holds. The case where \(x < 0\) is dealt with in a similar way.
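The exponential bound \(e^{-7t^{2}/32} \leq e^{8/7}e^{-t}\) used above is sharp: it is equivalent to a perfect square,

```latex
\frac{8}{7} - t + \frac{7t^{2}}{32}
  = \frac{7}{32}\Bigl(t - \frac{16}{7}\Bigr)^{2} \;\ge\; 0,
```

with equality exactly at \(t = 16/7\).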
Data Availability
No datasets were generated or analysed during the current study.
References
Cox, D.R.: The continuity correction. Biometrika 57, 217–219 (1970)
Emura, T., Liao, Y.T.: Critical review and comparison of continuity correction methods: the normal approximation to the binomial distribution. Commun. Stat., Simul. Comput. 47, 2266–2285 (2017)
Chen, L.H.Y., Goldstein, L., Shao, Q.: Normal Approximation by Stein’s Method. Probability and Its Applications. Springer, Heidelberg (2011)
Goldstein, L., Reinert, G.: Stein’s method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7, 935–952 (1997)
Fang, X.: Discretized normal approximation by Stein’s method. Bernoulli 20, 1404–1431 (2014)
McDonald, D.: The local limit theorem: a historical perspective. JIRSS 4, 73–86 (2005)
Petrov, V.V.: Sums of Independent Random Variables. de Gruyter, Berlin, Boston (1975)
Zolotukhin, A., Nagaev, S., Chebotarev, V.: On a bound of the absolute constant in the Berry–Esseen inequality for i.i.d. Bernoulli random variables. Mod. Stoch. Theory Appl. 5, 385–410 (2018)
Siripraparat, T., Neammanee, K.: A local limit theorem for Poisson binomial random variables. ScienceAsia 47, 111–116 (2021)
Auld, G., Neammanee, K.: A nonuniform local limit theorem for Poisson binomial random variables via Stein’s method. J. Inequal. Appl. (2024)
Chen, L.H.Y., Shao, Q.M.: Normal approximation under local dependence. Ann. Probab. 32, 1985–2028 (2004)
Röllin, A.: Translated Poisson approximation using exchangeable pair couplings. Ann. Appl. Probab. 17, 1596–1614 (2007)
Barbour, A.D., Röllin, A., Ross, N.: Error bounds in local limit theorems using Stein’s method. Bernoulli 25, 1076–1104 (2019)
Röllin, A.: Symmetric and centered binomial approximation of sums of locally dependent random variables. Electron. J. Probab. 13, 756–776 (2008)
Stein, C.: Approximate Computation of Expectations. Lecture Notes–Monograph Series. Institute of Mathematical Statistics, Hayward (1986)
Goldstein, L.: Berry–Esseen bounds for combinatorial central limit theorems and pattern occurrences, using zero and size biasing. J. Appl. Probab. 42, 661–683 (2005)
El Karoui, N., Jiao, Y.: Stein’s method and zero bias transformation for CDO tranche pricing. Finance Stoch. 13, 151–180 (2009)
Barbour, A.D., Jensen, J.L.: Local and tail approximations near the Poisson limit. Scand. J. Stat. 16, 75–87 (1989)
Andrews, G.E., Eriksson, K.: Integer Partitions. Cambridge University Press, Cambridge (2004)
Acknowledgements
The authors are grateful to two anonymous reviewers for their comments that helped improve our paper. KN is grateful to the Centre of Excellence in Mathematics for financial support.
Funding
This research is supported by Ratchadapisek Somphot Fund for Postdoctoral Fellowship, Chulalongkorn University.
Author information
Contributions
Both authors contributed equally to this research.
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Auld, G., Neammanee, K. Explicit constants in the nonuniform local limit theorem for Poisson binomial random variables. J Inequal Appl 2024, 67 (2024). https://doi.org/10.1186/s13660-024-03143-z