In this section, we will apply Theorem 3.1 to the classical Szász-Mirakyan operators \((L_{n}, n=1,2,\ldots)\). From a probabilistic viewpoint, such operators can be represented as follows. Let \((N_{\lambda}, \lambda\geq0)\) be the standard Poisson process, i.e., a stochastic process starting at the origin, having independent stationary increments and nondecreasing paths such that
$$ P(N_{\lambda}=k)=e^{-\lambda}\frac{\lambda^{k}}{k!},\quad k=0,1,\ldots ,\, \lambda\geq0. $$
(33)
Let \(n=1,2,\ldots\) and \(x\geq0\). Thanks to (33), the Szász-Mirakyan operator \(L_{n}\) can be written as
$$ L_{n}f(x)=\sum_{k=0}^{\infty}f \biggl( \frac{k}{n} \biggr) e^{-nx} \frac {(nx)^{k}}{k!}=Ef \biggl( \frac{N_{nx}}{n} \biggr), $$
(34)
where \(f\in\mathcal{C}([0,\infty))\). It is well known that
$$ E \biggl( \frac{N_{nx}}{n} \biggr)=x, \qquad E \biggl( \frac {N_{nx}}{n}-x \biggr)^{2}=\frac{x}{n}. $$
(35)
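As a purely illustrative sketch of the representation (34) and of the identities (35), the Poisson series can be evaluated numerically; the truncation tolerance and the test values of \(n\) and \(x\) below are arbitrary choices, not quantities from the text.

```python
import math

def szasz_mirakyan(f, x, n, tol=1e-12):
    """Evaluate L_n f(x) = E f(N_{nx}/n) by truncating the Poisson series in (34).

    The stopping rule based on `tol` is an illustrative choice; for very large n*x
    the probabilities should be handled in log space to avoid underflow.
    """
    lam = n * x
    total, k, weight = 0.0, 0, math.exp(-lam)   # weight = P(N_{nx} = k)
    while True:
        total += f(k / n) * weight
        if k > lam and weight < tol:            # the remaining tail is negligible
            break
        k += 1
        weight *= lam / k                       # Poisson recursion p_k = p_{k-1} * lam / k
    return total

# Sanity checks against (35): the mean and the second central moment of N_{nx}/n
n, x = 50, 2.0
print(szasz_mirakyan(lambda y: y, x, n))             # approximately x = 2.0
print(szasz_mirakyan(lambda y: (y - x) ** 2, x, n))  # approximately x/n = 0.04
```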
Accordingly, we choose in this case (recall (9), (10), and (12), as well as the subsequent comments)
$$ \sigma_{\varepsilon}(y)=\min \biggl( \frac{y}{\varepsilon},\sigma (y) \biggr),\qquad \sigma(y)=\sqrt{y},\quad y\geq0; \qquad \varepsilon = \frac{1}{\sqrt{n}}, \qquad a_{\varepsilon}=\frac{1}{n}. $$
(36)
As follows from (12), the set of nodes \(\mathcal{N}_{\varepsilon}=\{x_{i}, i\geq-(m+1)\}\), for \(\varepsilon=1/\sqrt{n}\), is given by
$$ x_{-(m+1)}=0, \qquad x_{0}=x, \qquad x_{i+1}-x_{i}=\sqrt{\frac {x_{i+1}}{n}},\quad i\geq-m, $$
(37)
\(x_{-m}\) being the unique node in the interval \((0,1/n]\). In order to apply Theorem 3.1 to the Szász-Mirakyan operators, we need to estimate the quantities \(\delta_{\varepsilon}\) and \(Eg_{\varepsilon}(N_{nx}/n)\), for \(\varepsilon=1/\sqrt{n}\). In this regard, the following two auxiliary results will be useful.
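The recursion in (37) determines the nodes explicitly: going downward from \(x_{0}=x\), one repeatedly applies \(y\mapsto y-\sqrt{y/n}\) until a value in \((0,1/n]\) is reached, whereas going upward each \(x_{i+1}\) is the positive root of a quadratic equation in \(\sqrt{x_{i+1}}\). A purely illustrative sketch of this construction reads as follows (the forward cutoff is an arbitrary choice).

```python
import math

def szasz_nodes(x, n, num_forward=5):
    """Nodes from (37) for eps = 1/sqrt(n): x_0 = x and x_{i+1} - x_i = sqrt(x_{i+1}/n).

    Returns (backward, forward), where backward = [x_{-1}, ..., x_{-m}, 0] and
    forward = [x_1, ..., x_{num_forward}]; num_forward is an illustrative cutoff.
    """
    # downward: x_i = x_{i+1} - sqrt(x_{i+1}/n), stopping at the unique node in (0, 1/n]
    backward, y = [], x
    while y > 1.0 / n:
        y -= math.sqrt(y / n)
        backward.append(y)
    backward.append(0.0)                        # x_{-(m+1)} = 0
    # upward: x_{i+1} - sqrt(x_{i+1}/n) = x_i is a quadratic in t = sqrt(x_{i+1})
    forward, y = [], x
    for _ in range(num_forward):
        t = (1.0 / math.sqrt(n) + math.sqrt(1.0 / n + 4.0 * y)) / 2.0
        y = t * t
        forward.append(y)
    return backward, forward

n, x = 100, 1.0
back, fwd = szasz_nodes(x, n)
print("x_(-m) =", back[-2], "lies in (0, 1/n] with 1/n =", 1.0 / n)
print("first forward nodes:", [round(v, 6) for v in fwd[:3]])
```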
Lemma 5.1
If \(n=1,2,\ldots\) , then \(\delta_{1/\sqrt{n}}\leq1/\sqrt{2n}\), where \(\delta_{\varepsilon}\) is defined in (18).
Proof
Denote
$$ q(x_{i})=\frac{x_{i+1}-x_{i}}{2\sigma ( (x_{i}+x_{i+1})/2 )}=\sqrt{\frac{x_{i+1}}{2n(x_{i}+x_{i+1})}}, \quad i\geq-(m+1), $$
(38)
where the last equality, which holds for \(i\geq-m\), follows from (37). Observe that
$$ q(x_{-(m+1)})=\frac{x_{-m}}{\sqrt{2x_{-m}}}=\sqrt{\frac{x_{-m}}{2}}\leq\frac{1}{\sqrt{2n}}, $$
since \(x_{-m}\leq1/n\).
For \(i\geq-m\), we have from (37) and (38)
$$ q^{2}(x_{i})= \frac{x_{i+1}}{2n ( 2x_{i+1}-\sqrt{x_{i+1}/n} )}\leq\frac{1}{2n}, $$
since \(x_{i+1}\geq1/n\). This, together with (18), completes the proof. □
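As a quick numerical illustration of Lemma 5.1, one may evaluate the ratios \(q(x_{i})\) in (38) over a finite batch of nodes generated from (37) and compare their maximum with \(1/\sqrt{2n}\); the values of \(n\), \(x\), and the forward cutoff below are arbitrary.

```python
import math

def max_q_ratio(x, n, num_forward=25):
    """Maximum of q(x_i) = (x_{i+1} - x_i) / (2 * sigma((x_i + x_{i+1}) / 2)) over sample nodes."""
    # nodes from (37): downward to x_{-(m+1)} = 0, plus a few forward nodes
    nodes, y = [x], x
    while y > 1.0 / n:
        y -= math.sqrt(y / n)
        nodes.insert(0, y)
    nodes.insert(0, 0.0)
    y = x
    for _ in range(num_forward):
        t = (1.0 / math.sqrt(n) + math.sqrt(1.0 / n + 4.0 * y)) / 2.0
        y = t * t
        nodes.append(y)
    sigma = math.sqrt                            # sigma(y) = sqrt(y), as in (36)
    return max((b - a) / (2.0 * sigma((a + b) / 2.0)) for a, b in zip(nodes, nodes[1:]))

n, x = 50, 2.0
print(max_q_ratio(x, n), "should not exceed 1/sqrt(2n) =", 1.0 / math.sqrt(2 * n))
```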
With the notations given in (36) and (37), we state the following lemma.
Lemma 5.2
Let \(n=1,2,\ldots\) and \(x>0\). If \(x>a_{\varepsilon}=1/n\), then:
(a) For any \(s=1,\ldots,m-1\), we have
$$ \sum_{i=-m}^{-(s+1)}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{-}\leq P\bigl(N_{nx}\leq \lceil nx_{-s}-1 \rceil\bigr)\leq\frac{nx}{ (n(x-x_{-s})+1 )^{2}} . $$
(b) For any \(l=1,2,\ldots\) , we have
$$ \sum_{i=l+1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq P\bigl(N_{nx}\geq \lfloor nx_{l} \rfloor\bigr)\leq\frac{x}{n(x_{l}-x)^{2}} . $$
(c) If \(x_{-1}\geq a_{\varepsilon}=1/n\), then
$$\begin{aligned}& \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)} E\biggl\vert \frac {N_{nx}}{n}-x\biggr\vert + \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})} E \biggl( \frac{N_{nx}}{n}-x_{-1} \biggr)_{-} \\& \quad {} +\sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq \frac{c+1}{4}+\frac {1}{4(c+1)}, \quad c=\sqrt{\frac{x}{x_{-1}}} . \end{aligned}$$
(39)
If \(x\leq a_{\varepsilon}=1/n\), then
$$ \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq P(N_{nx}\geq1). $$
(40)
Proof
Let \(\lambda>0\). We first claim that
$$ E(N_{\lambda}-u)_{-}\leq u P\bigl(N_{\lambda}=\lceil u-1 \rceil\bigr), \quad u\leq\lambda, $$
(41)
as well as
$$ E(N_{\lambda}-u)_{+}\leq\lambda P\bigl(N_{\lambda}=\lfloor u \rfloor\bigr),\quad u\geq \lambda. $$
(42)
In fact, it follows from (33) that \(kP (N_{\lambda}=k)=\lambda P(N_{\lambda}=k-1)\), \(k=1,2,\ldots\) . Therefore,
$$\begin{aligned} E(N_{\lambda}-u)_{-} =&\sum_{k< u} (u-k) P(N_{\lambda}=k)=uP (N_{\lambda}< u)-\lambda P(N_{\lambda}< u-1) \\ =& u P\bigl(N_{\lambda}\in [u-1,u )\bigr)+(u-\lambda) P(N_{\lambda}< u-1)\leq u P\bigl(N_{\lambda}= \lceil u-1 \rceil\bigr), \end{aligned}$$
thus showing (41). Inequality (42) follows in a similar way. Second, we claim that
$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{-}\leq n (x_{i+1}-x_{i}) P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr), $$
(43)
for \(i=-m,\ldots, -(s+1)\). Actually, suppose first that \(i=-m+1,\ldots, -(s+1)\). By (36), (37), and (41), the left-hand side in (43) is bounded above by
$$ \frac{\varepsilon\sigma_{\varepsilon}(x_{i})}{n(\varepsilon\sigma _{\varepsilon}(x_{i}))^{2}} E(N_{nx}-nx_{i})_{-}\leq n (x_{i+1}-x_{i})P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr). $$
As seen in (37), we have \(x_{-m}\leq a_{\varepsilon}= 1/n < x_{-m+1}\). Therefore,
$$\begin{aligned}& \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-m})} E \biggl( \frac {N_{nx}}{n}-x_{-m} \biggr)_{-} \\& \quad = P(N_{nx}=0)\leq\sqrt{nx_{-m+1}} P (N_{nx}=0) \\& \quad \leq n(x_{-m+1}-x_{-m})P\bigl(N_{nx}=\lceil nx_{-m}-1\rceil\bigr). \end{aligned}$$
Claim (43) is shown. On the other hand, it follows from (33) that the function \(h(k)=P(N_{\lambda}=k)\) is nondecreasing for \(0\leq k\leq\lfloor\lambda\rfloor\). This implies that
$$\begin{aligned}& \sum_{i=-m}^{-(s+1)}(nx_{i+1}-n x_{i})P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr) \\& \quad \leq \int_{0}^{nx_{-s}}P \bigl(N_{nx}=\lceil u-1 \rceil\bigr) \,du \\& \quad =\sum_{k=0}^{\lceil nx_{-s}-1 \rceil-1}P(N_{nx}=k)+ \int_{\lceil nx_{-s}-1\rceil}^{nx_{-s}} P\bigl(N_{nx}=\lceil u-1 \rceil\bigr) \,du \\& \quad \leq P\bigl(N_{nx}\leq\lceil nx_{-s}-1 \rceil\bigr). \end{aligned}$$
(44)
On the other hand, by Markov’s inequality applied to \((N_{nx}-nx)^{2}\), we have
$$\begin{aligned} P\bigl(N_{nx}\leq\lceil nx_{-s}-1 \rceil\bigr) \leq& P \bigl(N_{nx}-nx\leq n(x_{-s}-x)-1\bigr) \\ \leq&\frac{E(N_{nx}-nx)^{2}}{(n(x-x_{-s})+1)^{2}}=\frac{nx}{(n(x-x_{-s})+1)^{2}}, \end{aligned}$$
where the last equality follows from (35). This, together with (43) and (44), shows part (a). Part (b) follows in a similar manner, using (42) instead of (41).
To show part (c), note that \(\sigma_{\varepsilon}(x_{i})=\sigma(x_{i})\), \(i\geq-1\), because \(x_{-1}\geq a_{\varepsilon}=1/n\). Consider the function
$$ h_{\varepsilon}(y)= \frac{1}{2\varepsilon\sigma(x)} \biggl\vert \frac {y}{n}-x\biggr\vert + \frac{1}{\varepsilon\sigma(x_{-1})} \biggl( \frac {y}{n}-x_{-1} \biggr)_{-}+ \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma (x_{i})} \biggl( \frac{y}{n}-x_{i} \biggr)_{+}, \quad y \geq0. $$
If \(y< nx\), it is easily checked from (36) and (37) that
$$ h_{\varepsilon}(y)=\frac{1}{2}\biggl\vert \frac{y-nx}{\sqrt{nx}}\biggr\vert +c \biggl(\biggl\vert \frac{y-nx}{\sqrt{nx}}\biggr\vert -1 \biggr)_{+}\leq\varphi _{c} \biggl( \biggl\vert \frac{y-nx}{\sqrt{nx}} \biggr\vert \biggr), $$
(45)
where the last inequality follows from Lemma 5.1. Similarly, if \(y\geq nx\) and \(i\geq1\), we have
$$ \frac{1}{\varepsilon\sigma(x_{i})} \biggl( \frac{y}{n}-x_{i} \biggr)_{+}=\sqrt{ \frac{x}{x_{i}}} \biggl( \frac{y-nx}{\sqrt{nx}}-\frac{\sqrt {x_{1}}+\cdots+\sqrt{x_{i}}}{\sqrt{x}} \biggr)_{+}\leq \biggl( \biggl\vert \frac {y-nx}{\sqrt{nx}} \biggr\vert -i \biggr)_{+}, $$
thus implying, by virtue of Lemma 5.1, that
$$ h_{\varepsilon}(y)\leq\frac{1}{2}\biggl\vert \frac{y-nx}{\sqrt{nx}} \biggr\vert +\sum_{i=1}^{\infty}\biggl( \biggl\vert \frac{y-nx}{\sqrt {nx}}\biggr\vert -i \biggr)_{+}\leq \varphi_{c} \biggl(\biggl\vert \frac{y-nx}{\sqrt {nx}} \biggr\vert \biggr). $$
(46)
We therefore have from (35), (45), and (46)
$$ Eh_{\varepsilon}(N_{nx})\leq E\varphi_{c} \biggl( \biggl\vert \frac {N_{nx}-nx}{\sqrt{nx}} \biggr\vert \biggr)=\frac{c+1}{4}+ \frac{1}{4(c+1)}. $$
This shows (39). Finally, we will show inequality (40). From (36) and (42), we get
$$\begin{aligned}& \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+} \\& \quad \leq\sum_{i=1}^{\infty}\frac {x}{\varepsilon\sqrt{x_{i}}} P\bigl(N_{nx}=\lfloor nx_{i} \rfloor\bigr)=\sum_{i=1}^{\infty}\frac{nx}{\sqrt{nx_{i}}}P \bigl(N_{nx}=\lfloor nx_{i} \rfloor \bigr) \\& \quad \leq\sum _{i=1}^{\infty}P\bigl(N_{nx}=\lfloor nx_{i} \rfloor\bigr)\leq P(N_{nx}\geq1), \end{aligned}$$
since, by assumption and (37), we have \(nx\leq1< nx_{i}\) and \(nx_{i+1}-nx_{i} = \sqrt{nx_{i+1}}>1\), \(i=1,2,\ldots\) . This shows (40) and completes the proof. □
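The inequalities (41) and (42) involve the index shifts \(\lceil u-1\rceil\) and \(\lfloor u\rfloor\), which are easy to misplace; a purely illustrative numerical check over a small grid of values of \(\lambda\) and u (the grid itself is an arbitrary choice) can be carried out as follows.

```python
import math

def poisson_pmf(lam, kmax):
    """P(N_lam = k) for k = 0..kmax via the recursion p_k = p_{k-1} * lam / k."""
    p = [math.exp(-lam)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * lam / k)
    return p

def check_41_42(lam, u):
    """Check (41): E(N-u)_- <= u P(N = ceil(u-1)) for u <= lam,
    and (42): E(N-u)_+ <= lam P(N = floor(u)) for u >= lam."""
    kmax = int(lam + 10 * math.sqrt(lam) + 30)        # truncation: neglected tail is negligible
    p = poisson_pmf(lam, kmax)
    neg = sum((u - k) * p[k] for k in range(kmax + 1) if k < u)   # E(N_lam - u)_-
    pos = sum((k - u) * p[k] for k in range(kmax + 1) if k > u)   # E(N_lam - u)_+
    ok_41 = (u > lam) or neg <= u * p[math.ceil(u - 1)] + 1e-12
    ok_42 = (u < lam) or pos <= lam * p[math.floor(u)] + 1e-12
    return ok_41 and ok_42

assert all(check_41_42(lam, f * lam) for lam in (2.0, 7.5, 20.0) for f in (0.3, 1.0, 1.7))
print("inequalities (41) and (42) hold on the sampled grid")
```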
Denote
$$ K_{n}(x)=Eg_{1/\sqrt{n}} \biggl( \frac{N_{nx}}{n} \biggr),\quad n=1,2,\ldots,\, x>0, $$
(47)
where \(g_{\varepsilon}\) is defined in (17). For the Szász-Mirakyan operator defined in (34), we enunciate the following result.
Theorem 5.3
Let \(n=1,2,\ldots\) , \(x>0\), and \(\sigma(y)=\sqrt{y}\), \(y\geq0\). Then:
(a) If \(f\in\mathcal{C}([0,\infty))\), then
$$ \bigl\vert L_{n} f(x)-f(x)\bigr\vert \leq K_{n}(x) \omega_{\sigma}^{2} \biggl( f;\frac{1}{\sqrt {n}} \biggr)+ \bigl(1+K_{n}(x)\bigr)\omega_{\sigma}^{2} \biggl( f; \frac{1}{\sqrt {2n}} \biggr). $$
(b) If \(f\in\mathcal{C}_{cx}([0,\infty))\), then
$$ \bigl\vert L_{n} f(x)-f(x)\bigr\vert \leq K_{n}(x) \biggl( \omega_{\sigma}^{2} \biggl( f;\frac {1}{\sqrt{n}} \biggr)+ \omega_{\sigma}^{2} \biggl( f;\frac{1}{\sqrt {2n}} \biggr) \biggr). $$
(c) If \(f\in L_{\alpha}([0,\infty))\), for some \(\alpha\in(0,2]\), then
$$ \bigl\vert L_{n}f(x)-f(x)\bigr\vert \leq \bigl( K_{n}(x)+2^{-\alpha/2}\bigl(1+K_{n}(x)\bigr) \bigr)n^{-\alpha/2}. $$
The constants \(K_{n}(x)\) defined in (47), which appear in the upper bounds above, satisfy the following properties:
$$ \lim_{n\to\infty} K_{n}(x)=K=0.58333\ldots, \quad x>0, $$
(48)
where \(K\) is the same constant as that in (31), as well as
$$ 1\leq\sup\bigl\{ K_{n}(x): n=1,2,\ldots,\, x>0\bigr\} \leq1+\frac{1}{5}. $$
(49)
Proof
Parts (a)-(c) are direct consequences of Theorem 3.1, by choosing \(\varepsilon=1/\sqrt{n}\) and \(Y=N_{nx}/n\), taking into account that \(\delta_{1/\sqrt{n}}\leq1/\sqrt{2n}\), as follows from Lemma 5.1.
To show (48), fix \(x>0\) and \(0<\tau<x\). Choose n large enough so that \(a_{\varepsilon}=1/n< x-\tau\). Let \(s=1,2,\ldots\) and \(l=2,3,\ldots\) be such that
$$ x_{-(s+1)}< x-\tau\leq x_{-s}, \qquad x_{l}\leq x+\tau< x_{l+1}. $$
(50)
Let \(i=1,\ldots,l-1\). From (36), (37), and (50), we see that
$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+}=\sqrt{ \frac{x}{x_{i}}}E \biggl( \frac {N_{nx}-nx}{\sqrt{nx}}- \bar{x}_{i} \biggr)_{+}, \quad \bar {x}_{i}= \frac{\sqrt{x_{1}}+\cdots+\sqrt{x_{i}}}{\sqrt{x}}. $$
Again by (50), this implies that
$$\begin{aligned}& \sqrt{\frac{x}{x+\tau}} E \biggl( \frac{N_{nx}-nx}{\sqrt {nx}}-i \sqrt{\frac{x+\tau}{x}} \biggr)_{+} \\& \quad \leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+}\leq E \biggl( \frac{N_{nx}-nx}{\sqrt {nx}}-i \biggr)_{+}. \end{aligned}$$
(51)
Similarly, we have, for \(i=-s,\ldots,-1\),
$$\begin{aligned} E \biggl( \frac{nx-N_{nx}}{\sqrt{nx}}+i \biggr)_{+} &\leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{-} \\ &\leq\sqrt{\frac{x}{x-\tau}}E \biggl( \frac{nx-N_{nx}}{\sqrt {nx}}+i \sqrt{\frac{x-\tau}{x}} \biggr)_{+}. \end{aligned}$$
(52)
On the other hand, by the central limit theorem for the standard Poisson process, the random variable \((N_{nx}-nx)/\sqrt{nx}\) converges in law to the standard normal random variable Z, as \(n\to\infty\). Therefore, by the Helly-Bray theorem (cf. Billingsley [21], pp.335-338), we get from Lemma 5.2, (51), and (52)
$$\begin{aligned}& E \frac{|Z|}{2}+ \sqrt{\frac{x}{x+\tau}}\sum _{i=1}^{\infty}E \biggl( Z-i \sqrt{\frac{x+\tau}{x}} \biggr)_{+}+ \sum_{i=-\infty}^{-1} E( Z+i)_{+} \\& \quad \leq\varliminf_{n\to\infty} K_{n}(x)\leq\varlimsup_{n\to\infty} K_{n}(x) \\& \quad \leq E \frac{|Z|}{2}+\sum_{i=1}^{\infty}E( Z-i)_{+}+\sqrt{\frac{x}{x-\tau}}\sum _{i=-\infty }^{-1}E \biggl( Z+i\sqrt{\frac{x-\tau}{x}} \biggr)_{+}, \end{aligned}$$
where we have also used that \(-Z\) has the same law as Z.
Thus, (48) follows from (31) and Corollary 4.1 by letting \(\tau\to0\) in these last inequalities.
To show (49), let d be the largest solution to the equation
$$ d-\sqrt{d}-\sqrt{d-\sqrt{d}}=1,\quad d=\frac{4+\sqrt{5}+\sqrt{7+2\sqrt {5}}}{2}=4.811561 \ldots $$
(53)
and define the points
$$ x^{\star}=\frac{d}{n}, \qquad x^{\star}_{-1}=x^{\star}- \varepsilon\sigma \bigl(x^{\star}\bigr)=\frac{d-\sqrt{d}}{n}, \qquad x^{\star}_{-2}=x^{\star}_{-1}-\varepsilon\sigma \bigl(x^{\star}_{-1}\bigr)=\frac{1}{n}. $$
(54)
We distinguish the following cases:
Case 1. \(0< x\leq x^{\star}_{-2}=1/n\). Since \(EN_{nx}/n=x\), we have from (36)
$$ \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)} E \biggl\vert \frac {N_{nx}}{n}-x\biggr\vert = \frac{1}{2x} E\biggl\vert \frac{N_{nx}}{n}-x \biggr\vert = \frac{1}{x} E \biggl( \frac{N_{nx}}{n}-x \biggr)_{-}=P(N_{nx}=0). $$
We therefore have from (40) and (47)
$$ e^{-nx}=P(N_{nx}=0)\leq K_{n}(x)\leq P(N_{nx}=0)+P(N_{nx}\geq1)=1. $$
(55)
Letting \(x\to0\) in (55), we get the first inequality in (49).
Case 2. \(x^{\star}_{-2}=1/n< x\leq x^{\star}_{-1}\). From (54), we see that \(x_{-1}\leq x^{\star}_{-1}-\varepsilon\sigma (x^{\star}_{-1})=1/n\), thus implying that
$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})} E \biggl( \frac {N_{nx}}{n}-x_{-1} \biggr)_{-}=P(N_{nx}=0)\leq P(N_{1}=0)=e^{-1}. $$
Hence, we have from (47), (37), (51), and Lemma 5.1
$$\begin{aligned} K_{n}(x) \leq& e^{-1}+ \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)}E \biggl\vert \frac{N_{nx}}{n}-x\biggr\vert +\sum_{i=1}^{\infty}\frac {1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+} \\ \leq& e^{-1}+\frac{1}{2} E \biggl\vert \frac{N_{nx}-nx}{\sqrt{nx}} \biggr\vert +\sum_{i=1}^{\infty}E \biggl( \biggl\vert \frac{N_{nx}-nx}{\sqrt{nx}} \biggr\vert -i \biggr)_{+}\leq e^{-1}+ E \psi \biggl(\biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}} \biggr\vert \biggr) \\ \leq& e^{-1}+E\varphi_{1} \biggl(\biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}}\biggr\vert \biggr)=e^{-1}+\frac{1}{2}+ \frac{1}{8}< 1, \end{aligned}$$
where we have used (35) in the last equality.
Case 3. \(x^{\star}_{-1}< x\leq x^{\star}\). Again by (54), we see that \(x_{-2}\leq1/n < x_{-1}\). Thus,
$$\begin{aligned}& \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-2})} E \biggl( \frac {N_{nx}}{n}-x_{-2} \biggr)_{-}+ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})}E \biggl( \frac{N_{nx}}{n}-x_{-1} \biggr)_{-} \\& \quad =P(N_{nx}=0)+\sqrt{\frac{n}{x_{-1}}} \biggl( x_{-1} P(N_{nx}=0)+ \biggl( x_{-1}-\frac{1}{n} \biggr) P(N_{nx}=1) \biggr). \end{aligned}$$
(56)
Set \(\lambda=nx\) and note that \(\lambda> nx^{\star}_{-1}=d-\sqrt{d}\), as follows from (54). Since \(nx_{-1}=nx-\sqrt{nx}=\lambda-\sqrt {\lambda}\), the right-hand side in (56) becomes after some simple computations
$$ \biggl( 1+\lambda-\frac{\sqrt{\lambda}}{\sqrt{\lambda-\sqrt{\lambda }}} \biggr) e^{-\lambda} \leq(1+\lambda)e^{-\lambda}\leq(1+d-\sqrt {d})e^{-(d-\sqrt{d})}\leq0.264, $$
(57)
as follows from (53). As in Case 2, we have from (56) and (57)
$$ K_{n}(x)\leq0.264 + E\varphi_{1} \biggl( \biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}}\biggr\vert \biggr)= 0.264+\frac{1}{2}+ \frac{1}{8}< 1. $$
Case 4. \(x^{\star}< x\). We claim that
$$ \sum_{i=-m}^{-2} \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{-}\leq P \bigl(N_{nx}\leq\lceil nx_{-1}-1 \rceil\bigr)\leq \frac{1}{2}. $$
(58)
Actually, the first inequality in (58) readily follows from Lemma 5.2(a). As far as the second one is concerned, observe that \(nx_{-1}=nx-\sqrt{nx}< nx-1\), which implies that \(\lceil nx_{-1}-1 \rceil\leq\lceil nx-2\rceil\leq\lfloor nx \rfloor-1\). Therefore,
$$ P\bigl(N_{nx}\leq\lceil nx_{-1}-1 \rceil\bigr) \leq P\bigl(N_{nx}\leq\lfloor nx \rfloor-1\bigr)\leq P \bigl(N_{\lfloor nx \rfloor}\leq\lfloor nx \rfloor-1\bigr), $$
(59)
since \(N_{\lfloor nx \rfloor}\leq N_{nx}\). On the other hand, it has been shown in [22] that the sequence \((P(N_{k}\leq k-1), k=1,2,\ldots)\) strictly increases to \(1/2\). This, together with (59), shows claim (58).
Finally, it is easy to see that the function \(\sqrt{x/x_{-1}}\), \(x\geq x^{\star}\), strictly decreases. It therefore follows from (54) that
$$ \sqrt{\frac{x}{x_{-1}}}\leq\sqrt{\frac{x^{\star}}{x_{-1}^{\star}}}= \sqrt {\frac{d}{d-\sqrt{d}}}=:c^{\star}. $$
(60)
We thus have from (58), (60), and Lemma 5.2(c)
$$ K_{n}(x)\leq\frac{1}{2}+\frac{c^{\star}+1}{4}+ \frac{1}{4(c^{\star}+1)}=1.195045\ldots\leq1+\frac{1}{5}. $$
The proof is complete. □
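The function \(g_{\varepsilon}\) from (17) is not restated in this section; throughout the proof above, \(K_{n}(x)\) is handled by combining the term \(E\vert N_{nx}/n-x\vert /(2\varepsilon\sigma_{\varepsilon}(x))\) with the negative-part terms at the nodes \(x_{i}\), \(i\leq-1\), and the positive-part terms at the nodes \(x_{i}\), \(i\geq1\). Under the assumption that \(g_{\varepsilon}\) is exactly the sum of these terms, a purely illustrative numerical evaluation of \(K_{n}(x)\) reads as follows (the test values of \(n\) and \(x\) are arbitrary); for fixed \(x>0\) the computed values should approach \(K=0.58333\ldots\), in accordance with (48), while remaining below \(1+1/5\), in accordance with (49).

```python
import math

def poisson_pmf(lam, kmax):
    """P(N_lam = k) for k = 0..kmax; for very large lam this should be done in log space."""
    p = [math.exp(-lam)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * lam / k)
    return p

def K_n(x, n):
    """K_n(x) from (47), assuming the decomposition of g_eps used in the proof of Theorem 5.3."""
    eps = 1.0 / math.sqrt(n)
    sigma_eps = lambda y: min(y / eps, math.sqrt(y))          # (36)
    lam = n * x
    kmax = int(lam + 10.0 * math.sqrt(lam) + 30)
    pmf = poisson_pmf(lam, kmax)
    # nodes below x, from (37): x_i = x_{i+1} - sqrt(x_{i+1}/n), down to the node in (0, 1/n]
    below, y = [], x
    while y > 1.0 / n:
        y -= math.sqrt(y / n)
        below.append(y)
    # nodes above x: x_{i+1} solves x_{i+1} - sqrt(x_{i+1}/n) = x_i
    above, y = [], x
    while y <= kmax / n:
        t = (1.0 / math.sqrt(n) + math.sqrt(1.0 / n + 4.0 * y)) / 2.0
        y = t * t
        above.append(y)
    total = sum(abs(k / n - x) * pmf[k] for k in range(kmax + 1)) / (2 * eps * sigma_eps(x))
    for xi in below:      # E(N_{nx}/n - x_i)_-  terms, i <= -1 (the node 0 contributes nothing)
        total += sum((xi - k / n) * pmf[k] for k in range(kmax + 1) if k / n < xi) / (eps * sigma_eps(xi))
    for xi in above:      # E(N_{nx}/n - x_i)_+  terms, i >= 1
        total += sum((k / n - xi) * pmf[k] for k in range(kmax + 1) if k / n > xi) / (eps * sigma_eps(xi))
    return total

for n in (10, 100, 500):
    print(n, K_n(1.0, n))   # expected to approach 0.58333... and to stay below 1.2
```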
As mentioned in the Introduction, Theorem 5.3 illustrates that the estimates of the general constants in (3) and (4) may be quite different. Such estimates mainly depend on two facts: the set of functions under consideration (parts (a)-(c) in Theorem 5.3), and the kind of estimate we are interested in, namely, pointwise estimate or uniform estimate (see equations (48) and (49), respectively).