
Towards the best constant in front of the Ditzian-Totik modulus of smoothness

Abstract

We give accurate estimates for the constants

$$ K\bigl(\mathcal{A}(I), n, x\bigr)=\sup_{f\in\mathcal{A}(I)}\frac{|L_{n} f(x)-f(x)|}{\omega_{\sigma}^{2} ( f; 1/\sqrt{n} )}, \quad x\in I, n=1,2,\ldots, $$

where \(I=\mathbb {R}\) or \(I=[0,\infty)\), \(L_{n}\) is a positive linear operator acting on real functions f defined on the interval I, \(\mathcal{A}(I)\) is a certain subset of such functions, and \(\omega _{\sigma}^{2} (f;\cdot)\) is the Ditzian-Totik modulus of smoothness of f with weight function σ. This is done under the assumption that σ is concave and satisfies some simple boundary conditions at the endpoint of I, if any. Two closely connected illustrative examples are discussed, namely, the Weierstrass and Szász-Mirakyan operators. In the first case, which involves the usual second modulus, we obtain the exact constants when \(\mathcal{A}(\mathbb {R})\) is the set of convex functions or a suitable set of continuous piecewise linear functions.

1 Introduction

Let I be a closed real interval with nonempty interior \(\mathring{I}\). A function \(\sigma:I\to[0,\infty)\) is called a weight function if \(\sigma(y) >0\), \(y\in\mathring{I}\). The usual second order differences of a function \(f:I\to \mathbb {R}\) are defined as

$$ \Delta_{h}^{2} f(y)=f(y+h)-2f(y)+f(y-h), \quad [y-h,y+h] \subseteq I, h\geq0. $$

Recall (cf. [1]) that the Ditzian-Totik modulus of smoothness of f with weight function σ is defined by

$$ \omega_{\sigma}^{2} (f;\delta)=\sup\bigl\{ \bigl\vert \Delta_{h\sigma(y)}^{2} f(y)\bigr\vert : 0\leq h\leq\delta, \bigl[y-h\sigma(y),y+h \sigma(y)\bigr]\subseteq I\bigr\} , \quad \delta\geq0. $$

If \(\sigma\equiv1\), we simply denote by \(\omega^{2} (f;\cdot)=\omega _{\sigma}^{2} (f;\cdot)\) the usual second modulus of smoothness of f. Also, we denote by \(\mathcal{C}(I)\) the set of continuous functions \(f:I\to \mathbb {R}\) such that \(0<\omega_{\sigma}^{2} (f;\delta)<\infty\), \(\delta>0\).
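
Since \(\omega_{\sigma}^{2}\) is defined through a supremum, it can be approximated numerically by a grid search over y and h. The following Python sketch (our own illustration, not part of the original argument) approximates \(\omega_{\sigma}^{2}(f;\delta)\) from below on \(I=[0,\infty)\); the truncation bound `y_max` and the mesh sizes are ad hoc choices.

```python
import math

def second_diff(f, y, step):
    """Symmetric second-order difference Delta_step^2 f(y)."""
    return f(y + step) - 2.0 * f(y) + f(y - step)

def omega2_sigma(f, sigma, delta, y_max=50.0, n_y=2000, n_h=50):
    """Grid-search approximation (from below) of omega_sigma^2(f; delta)
    on I = [0, infinity); y_max and the mesh sizes are truncation choices."""
    best = 0.0
    for j in range(1, n_y + 1):
        y = y_max * j / n_y
        for k in range(1, n_h + 1):
            h = delta * k / n_h
            step = h * sigma(y)
            if y - step >= 0.0:   # [y - h*sigma(y), y + h*sigma(y)] must lie in I
                best = max(best, abs(second_diff(f, y, step)))
    return best

# Example with the weight sigma(y) = sqrt(y) used in Section 5:
print(omega2_sigma(math.sqrt, math.sqrt, 0.5))
```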

It is well known (see, for instance, [2–7], and [8]) that many sequences \((L_{n}, n=1,2,\ldots)\) of positive linear operators acting on \(\mathcal{C}(I)\) satisfy direct and converse inequalities of the form

$$ K_{1} \omega_{\sigma}^{2} \biggl(f; \frac{1}{\sqrt{n}} \biggr)\leq\sup_{x\in I} \bigl\vert L_{n} f(x)-f(x)\bigr\vert \leq K_{2} \omega_{\sigma}^{2} \biggl(f;\frac{1}{\sqrt {n}} \biggr),\quad n=1,2,\ldots, $$
(1)

where \(f\in\mathcal{C}(I)\), \(K_{1}\) and \(K_{2}\) are absolute constants, and σ is an appropriate weight function depending on the operators under consideration. From a probabilistic perspective, the weight σ can be understood as follows. Let \(n=1,2,\ldots\) and \(x\in I\), and suppose that we have the representation

$$ L_{n}f(x)=Ef\bigl(Y_{n}(x)\bigr), \quad f\in \mathcal{C}(I), $$

where E stands for mathematical expectation and \(Y_{n}(x)\) is an I-valued random variable whose mean and standard deviation are, respectively, given by

$$ E\bigl(Y_{n}(x)\bigr)=x, \qquad \sqrt{E \bigl(Y_{n}(x)-x\bigr)^{2}}=\frac{\sigma(x)}{\sqrt{n}}. $$
(2)

In such a case, we can write

$$ L_{n}f(x)=E f \biggl( x+ \frac{\sigma(x)}{\sqrt{n}} Z_{n}(x) \biggr),\quad Z_{n}(x)=\frac{Y_{n}(x)-x}{\sigma(x)/ \sqrt{n}}. $$

Since the standard deviation of \(Z_{n}(x)\) equals 1, it seems natural to choose in (1) the weight function σ defined in (2).

Several authors have obtained estimates of the upper constant \(K_{2}\) in (1) for the ordinary second modulus of smoothness, i.e., for \(\sigma\equiv1\). For instance, with regard to the Bernstein polynomials, Gonska [9] showed that \(1\leq K_{2} \leq 3.25\), Păltănea [10] obtained \(K_{2}=1.094\), and the latter author closed the problem in [11] by showing that \(K_{2}=1\) is the best possible constant. For the Weierstrass operator, Adell and Sangüesa [12] gave \(K_{2}=1.385\). Finally, for a certain class of Bernstein-Durrmeyer operators preserving linear functions, we refer the reader to Gonska and Păltănea [13].

For the Ditzian-Totik modulus in the strict sense, i.e., for nonconstant σ, some estimates are also available. In this respect, Adell and Sangüesa [14] showed that \(K_{2}=4\) for the Szász operators and for the Bernstein polynomials. For such polynomials, the aforementioned estimate was improved by Gavrea et al. [15] and by Bustamante [16], who obtained \(K_{2}=3\), and finally by Păltănea [11], who showed that \(K_{2}=2.5\). Referring to noncentered operators, that is, operators for which the first equality in (2) is not fulfilled, we mention the estimates for both \(K_{1}\) and \(K_{2}\) with regard to gamma operators proved in [17].

Once it is known that a sequence \((L_{n}, n=1,2,\ldots)\) satisfies (1), a natural question is to ask for the uniform constants

$$ \sup \biggl\{ \frac{|L_{n} f(x)-f(x)|}{\omega_{\sigma}^{2} ( f; \frac {1}{\sqrt{n}} )}: f\in\mathcal{A}(I), n=1,2,\ldots,\, x\in I \biggr\} =K\bigl(\mathcal{A}(I)\bigr), $$
(3)

as well as for the local constants

$$ \sup \biggl\{ \frac{|L_{n} f(x)-f(x)|}{\omega_{\sigma}^{2} ( f; \frac {1}{\sqrt{n}} )}: f\in\mathcal{A}(I) \biggr\} =K \bigl(\mathcal {A}(I), n, x\bigr), \quad n=1,2,\ldots,\, x\in I, $$
(4)

where \(\mathcal{A}(I)\) is a certain subset of \(\mathcal{C}(I)\). Such questions are meaningful, because in specific examples, the estimates for the constants in (3) and (4) may be quite different, mainly depending on the degree of smoothness of the functions in the set \(\mathcal{A}(I)\) and on the distance from x to the boundary of I (see Section 5).

The aim of this paper is to give a general method to provide accurate estimates of the constants in (3) and (4) when \(I=\mathbb {R}\) or \(I=[0,\infty)\). In this last case, the main assumption is that the weight function σ is concave and satisfies a simple boundary condition at the origin (see (9) in Section 2), whereas for \(I=\mathbb {R}\), σ is assumed to be constant. In view of the probabilistic meaning of σ described in (2), such assumptions do not seem to be very restrictive and are fulfilled in the usual examples. The method relies upon the approximation of any function \(f\in\mathcal{C}(I)\) by an interpolating continuous piecewise linear function having an appropriate set of nodes, depending on the weight σ.

The main results are Theorems 3.1 and 3.2, stated in Section 3. To keep the paper to a moderate size, we only consider two illustrative examples. The first one is the classical Weierstrass operator, involving the usual second modulus of smoothness (see Corollary 4.1). In this case, we are able to obtain the exact constants in (3) and (4) when the set \(\mathcal{A}(\mathbb {R})\) is either the set of convex functions or a certain set of continuous piecewise linear functions. The second example refers to the Szász-Mirakyan operators (Theorem 5.3). In this case, we give different upper estimates of the aforementioned constants, heavily depending on the set of functions under consideration and on the kind of convergence we are interested in, namely, pointwise convergence or uniform convergence. Both examples are connected in the sense that, roughly speaking, the upper estimates for Szász-Mirakyan operators are, asymptotically, the same as those for the Weierstrass operator. This is due to the central limit theorem satisfied by the standard Poisson process, which can be used to represent Szász-Mirakyan operators.

2 Continuous piecewise linear functions

If \(I=\mathbb {R}\), we fix \(x\in \mathbb {R}\) and denote by \(\mathcal{N}\) an ordered set of nodes \(\{x_{i}, i\in \mathbb {Z}\}\) with \(x_{0}=x\). If \(I=[0,\infty)\), we fix \(x>0\) and denote by \(\mathcal{N}\) an ordered set of nodes \(\{x_{i}, i\geq-(m+1)\}\) such that \(0=x_{-(m+1)}<\cdots <x_{-1}<x_{0}=x\), for some \(m=0,1,2,\ldots\) . Also, we denote by \(\mathcal {L}(I)\) the set of continuous piecewise linear functions \(g:I\to \mathbb {R}\) whose set of nodes is \(\mathcal{N}\).

Given a sequence \((c_{i}, i\in \mathbb {Z})\), we write \(\delta c_{i}= c_{i+1}-c_{i}\), \(i\in \mathbb {Z}\). We set \(y_{+}=\max(0,y)\), \(y_{-}=\max (0,-y)\), and denote by \(1_{A}\) the indicator function of the set A. For the sake of concreteness, we state the following two lemmas for \(I=[0,\infty)\), although both of them are also true for \(I=\mathbb {R}\). We start with the following auxiliary result taken from [18].

Lemma 2.1

For any \(g\in\mathcal{L}([0,\infty))\) and \(y\geq0\), we have the representation

$$\begin{aligned}& g(y)-g(x)-\frac{c_{0}+c_{1}}{2}(y-x) \\& \quad =\frac{\delta c_{0}}{2}|y-x|+\sum_{i=1}^{\infty}\delta c_{i} (y-x_{i})_{+}+\sum_{i=-m}^{-1} \delta c_{i} (y-x_{i})_{-}, \end{aligned}$$
(5)

where

$$ c_{i}=\frac{g(x_{i})-g(x_{i-1})}{x_{i}-x_{i-1}},\quad i\geq-m. $$
(6)

Lemma 2.2

If \(g\in\mathcal{L}([0,\infty))\) and \(h\geq0\), then

$$ \Delta_{h\sigma(y)}^{2} g(y)=\sum _{i=-m}^{\infty}\delta c_{i} \bigl( h \sigma(y)-|y-x_{i}| \bigr)_{+}, \quad y-h\sigma(y) \geq0. $$
(7)

Moreover, if \(\sigma=1\) and \(g\in\mathcal{L}(\mathbb {R})\) has set of nodes \(\mathcal{N}=\{x+i\varepsilon, i\in \mathbb {Z}\}\), for some \(\varepsilon>0\), then

$$ \omega^{2} (g;h)=h\sup\bigl\{ \vert \delta c_{i}\vert , i\in \mathbb {Z}\bigr\} ,\quad 0\leq h\leq\varepsilon. $$
(8)

Proof

Let \(i\geq-m\). Denote \(s_{i}(y)=|y-x_{i}|/2\), \(y\geq0\). It is easily checked that

$$ \Delta_{h\sigma(y)}^{2} s_{i}(y)=\bigl(h \sigma(y)-|y-x_{i}|\bigr)_{+}, \quad h\geq 0, y-h\sigma(y)\geq0. $$

This, together with (5) and the equalities

$$ y_{+}=\frac{1}{2}\bigl(\vert y\vert +y\bigr), \qquad y_{-}= \frac{1}{2}\bigl(\vert y\vert -y\bigr), \quad y\in \mathbb {R}, $$

shows (7). On the other hand, let \(0\leq h \leq\varepsilon\) and denote \(q_{i}(y)=(h-|y-x_{i}|)_{+}\), \(y\in \mathbb {R}\), \(x_{i}=x+i\varepsilon\), \(i \in \mathbb {Z}\). Suppose that \(y\in [x_{j},x_{j+1}]\), for some \(j\in \mathbb {Z}\). Since \(q_{i}(y)=0\), \(i\neq j, j+1\), we have

$$\begin{aligned} h \vert \delta c_{j}\vert =& \bigl\vert \Delta_{h}^{2} g(x_{j})\bigr\vert \leq\sup_{x_{j}\leq y \leq x_{j+1}} \bigl\vert \Delta_{h}^{2} g(y)\bigr\vert =\sup_{x_{j}\leq y \leq{x_{j+1}}} \bigl\vert \delta c_{j} q_{j} (y)+\delta c_{j+1} q_{j+1}(y)\bigr\vert \\ \leq&\max\bigl(\vert \delta c_{j}\vert ,\vert \delta c_{j+1}\vert \bigr)\sup_{x_{j}\leq y \leq x_{j+1}}\bigl(q_{j} (y)+q_{j+1}(y)\bigr)=h\max\bigl(\vert \delta c_{j}\vert , \vert \delta c_{j+1}\vert \bigr). \end{aligned}$$

This shows (8) and completes the proof. □
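
Equality (8) is easy to check numerically. The sketch below (our illustration, with arbitrarily chosen node values; the brute-force grid is only an approximation) builds a random \(g\in\mathcal{L}(\mathbb {R})\) on equally spaced nodes and compares \(\omega^{2}(g;h)\), computed by direct search, with \(h\sup_{i}|\delta c_{i}|\).

```python
import random

random.seed(1)
x0, eps, N = 0.0, 1.0, 20
nodes = [x0 + i * eps for i in range(-N, N + 1)]
vals = [random.uniform(-1.0, 1.0) for _ in nodes]   # g(x_i), chosen at random

def g(y):
    """Continuous piecewise linear interpolant of the node values."""
    j = min(max(int((y - nodes[0]) // eps), 0), len(nodes) - 2)
    t = (y - nodes[j]) / eps
    return (1.0 - t) * vals[j] + t * vals[j + 1]

c = [(vals[j + 1] - vals[j]) / eps for j in range(len(vals) - 1)]   # slopes c_i
dc = [c[j + 1] - c[j] for j in range(len(c) - 1)]                   # delta c_i

h = 0.4 * eps
grid = [nodes[0] + h + 0.001 * k
        for k in range(int((nodes[-1] - nodes[0] - 2 * h) / 0.001))]
brute = max(abs(g(y + h) - 2.0 * g(y) + g(y - h)) for y in grid)
print(brute, h * max(abs(d) for d in dc))   # the two values agree up to the grid mesh
```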

From now on, we make the following assumptions with respect to the weight function σ. If \(I=\mathbb {R}\), we assume that \(\sigma \equiv1\), whereas if \(I=[0,\infty)\), we assume that σ is concave (and therefore nondecreasing) and satisfies the boundary condition

$$ \lim_{y\to0} \frac{y}{\sigma(y)}=0. $$
(9)

Assumption (9) seems to be essential to guarantee a direct inequality (the upper bound in (1)). Actually, for a weight function σ not satisfying (9), a sequence \((L_{n}, n=1,2,\ldots)\) of positive linear operators that fails to satisfy the upper inequality in (1) has been constructed in Section 4 of [14]. On the other hand, the concavity of σ readily implies that the function \(r(y)=y/\sigma(y)\), \(y>0\), is continuous and strictly increasing. Thus, for any \(\varepsilon>0\), there is a unique number \(a_{\varepsilon}\) such that

$$ a_{\varepsilon}= \varepsilon\sigma(a_{\varepsilon})>0. $$
(10)

Let \(\varepsilon>0\). We construct the set of nodes \(\mathcal {N}_{\varepsilon}\) as follows. If \(I=\mathbb {R}\), we fix \(x\in \mathbb {R}\) and define \(\mathcal{N}_{\varepsilon}=\{x_{i}, i\in \mathbb {Z}\}\) as

$$ x_{i}=x+i\varepsilon, \quad i\in \mathbb {Z}. $$
(11)

If \(I=[0,\infty)\), we define the new concave weight function

$$ \sigma_{\varepsilon}(y)= \min \biggl( \frac{y}{\varepsilon},\sigma (y) \biggr), \quad y \geq0. $$

We fix \(x>0\) and define \(\mathcal{N}_{\varepsilon}=\{x_{i}, i\geq-(m+1)\} \), for some \(m=0,1,\ldots\) , as follows. We start from the point \(x_{0}=x\), move to the right by choosing \(x_{i+1}-x_{i}=\varepsilon\sigma _{\varepsilon}(x_{i+1})\), \(i=0,1,2,\ldots\) , then move to the left by choosing \(x_{i+1}-x_{i}=\varepsilon\sigma_{\varepsilon}(x_{i+1})\), \(i=-1,\ldots ,-m\), and finally set \(x_{-(m+1)}=0\). In other words,

$$ x_{-(m+1)}=0,\qquad x_{0}=x, \qquad x_{i+1}-x_{i}=\varepsilon\sigma _{\varepsilon}(x_{i+1}), \quad i\geq-(m+1). $$
(12)

It is easy to check that \(x_{-m}\) is the unique node in the interval \((0,a_{\varepsilon}]\). On the other hand, the weight \(\sigma_{\varepsilon}\) is very convenient for simplifying notation near the origin. For instance, we always have \(y-h\sigma_{\varepsilon}(y)\geq0\), \(y\geq0\), \(0\leq h\leq\varepsilon\). Finally, we mention that the procedure to build up the set \(\mathcal{N}_{\varepsilon}\) defined in (12) is close in spirit to the so-called ‘canonical sequence’ in Păltănea [11], Section 2.5.1 (see also Gonska and Tachev [19, 20] and Bustamante [16]).
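
In the recursion (12), each right step determines \(x_{i+1}\) only implicitly, since the step length is evaluated at the new node; because \(y\mapsto y-\varepsilon\sigma_{\varepsilon}(y)\) is continuous and nondecreasing (by concavity, \(\sigma'(y)\leq\sigma(y)/y\)), it can be solved by bisection, while the left steps are explicit. The following sketch (our illustration; the starting bracket, iteration counts, and tolerance are arbitrary choices) builds \(\mathcal{N}_{\varepsilon}\) for a concave weight σ on \([0,\infty)\).

```python
import math

def build_nodes(sigma, x, eps, right_count=10, tol=1e-12):
    """Node set (12) on [0, infinity): x_0 = x, x_{i+1} - x_i = eps*sigma_eps(x_{i+1}),
    with sigma_eps(y) = min(y/eps, sigma(y))."""
    step = lambda y: min(y, eps * sigma(y))       # eps * sigma_eps(y)

    right = [x]
    for _ in range(right_count):                  # implicit step: solve z - step(z) = x_i
        target, hi = right[-1], right[-1] + 1.0
        while hi - step(hi) < target:             # bracket the solution
            hi *= 2.0
        lo = target
        for _ in range(100):                      # bisection
            mid = 0.5 * (lo + hi)
            if mid - step(mid) < target:
                lo = mid
            else:
                hi = mid
        right.append(hi)

    left = [x]
    while left[-1] - step(left[-1]) > tol:        # explicit step going left
        left.append(left[-1] - step(left[-1]))
    left.append(0.0)                              # x_{-(m+1)} = 0

    return left[:0:-1] + right                    # increasing list of all nodes

print(build_nodes(math.sqrt, 2.0, 0.1)[:8])
```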

To close this section, we give the following two auxiliary results. The first one is concerned with the symmetric function

$$ \psi(y)=\frac{1}{2}\vert y\vert +\sum _{i=1}^{\infty}\bigl(\vert y\vert -i\bigr)_{+}, \quad y \in \mathbb {R}. $$
(13)

Also, denote by \(\lfloor x \rfloor\) and \(\lceil x \rceil\) the floor and the ceiling of \(x\in \mathbb {R}\), respectively, that is,

$$ \lfloor x \rfloor=\sup\{k\in \mathbb {Z}: k\leq x\},\qquad \lceil x \rceil=\inf\{k \in \mathbb {Z}: k\geq x\}. $$

Lemma 2.3

Let \(c\geq1\) and let ψ be as in (13). Then,

$$ \max \biggl( \frac{|y|}{2}+c\bigl(|y|-1\bigr)_{+}, \psi(y) \biggr)\leq \frac {c+1}{4}y^{2}+\frac{1}{4(c+1)}=:\varphi_{c}(y), \quad y \in \mathbb {R}. $$
(14)

Proof

No generality is lost if we assume that \(y\geq0\). If \(0\leq y \leq1\), inequality (14) is equivalent to the obvious inequality \(((c+1)y-1)^{2}\geq0\). Suppose that \(1 \leq y\). In this case, the inequality \(y/2+c(y-1)\leq\varphi_{c}(y)\) is equivalent to \(((c+1)y-(2c+1))^{2}\geq0\), which is obviously true. On the other hand, it is readily seen that \(\varphi_{1}(y)\leq\varphi_{c} (y)\), \(c\geq1\). The inequality \(\psi (y)\leq\varphi_{1} (y)\) is equivalent to

$$ \frac{y}{2} + \sum_{i=1}^{\lfloor y \rfloor} (y-i) \leq\frac {y^{2}}{2}+\frac{1}{8}, $$

also equivalent to

$$ \eta(y):= (2y-1)^{2}\geq8 \sum _{i=1}^{\lfloor y \rfloor} (y-i)= 4 \lfloor y \rfloor\bigl(2y - \bigl(1+\lfloor y \rfloor\bigr)\bigr)=:\nu(y). $$
(15)

It is easily checked that

$$ \eta \biggl( m + \frac{1}{2} \biggr)=\nu \biggl( m+\frac{1}{2} \biggr), \qquad \eta' \biggl( m + \frac{1}{2} \biggr)= \nu' \biggl( m+\frac {1}{2} \biggr),\quad m=1,2,\ldots. $$

These equalities imply (15), since η is convex and ν is linear in each interval \([m,m+1)\), \(m=1,2,\ldots\) . The proof is complete. □
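
Incidentally, inequality (14) is also easy to check numerically. The following sketch (a spot-check on an arbitrary grid, not a proof) compares both sides for a few values of \(c\geq1\); the grid range and tolerance are our own choices.

```python
import math

def psi(y):
    """psi from (13); the series has only finitely many nonzero terms."""
    a = abs(y)
    return 0.5 * a + sum(a - i for i in range(1, math.floor(a) + 1))

def phi(c, y):
    """phi_c from (14)."""
    return 0.25 * (c + 1.0) * y * y + 0.25 / (c + 1.0)

# Spot-check of (14) on a grid, for several values of c >= 1.
for c in (1.0, 1.5, 3.0):
    gap = max(max(0.5 * abs(y) + c * max(abs(y) - 1.0, 0.0), psi(y)) - phi(c, y)
              for y in (0.01 * k - 10.0 for k in range(2001)))
    print(c, gap <= 1e-12)   # True: the left-hand side never exceeds phi_c
```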

The second is the following lemma, proved in Păltănea [11], Lemma 2.5.7, and in Bustamante [16].

Lemma 2.4

Let I be a real interval and let \(f:I\to \mathbb {R}\) be a function such that \(f(c)=f(d)=0\), for some \(c,d\in I\) with \(c\leq d\). If σ is a concave weight function on I, then

$$ \sup_{c\leq y \leq d} \bigl\vert f(y)\bigr\vert \leq \omega_{\sigma}^{2} \biggl( f; \frac {d-c}{2\sigma((c+d)/2)} \biggr). $$

3 Main results

As usual, assume that \(I=\mathbb {R}\) or \(I=[0,\infty)\). Let Y be a random variable taking values in I and fix \(x\in\mathring{I}\). Assume that

$$ EY=x, \qquad E(Y-x)^{2}< \infty. $$
(16)

We will consider the following subsets of functions in \(\mathcal {C}(I)\): \(\mathcal{C}_{cx}(I)\) is the set of convex functions, \(\mathcal {L}_{\varepsilon}(I)\), \(\varepsilon>0\), is the set of functions in \(\mathcal{L}(I)\) whose set of nodes is \(\mathcal{N}_{\varepsilon}\), as defined in Section 2, and \(L_{\alpha}(I)\), \(\alpha\in(0,2]\), is the set of functions f such that \(\omega_{\sigma}^{2}(f;\delta)\leq\delta ^{\alpha}\), \(\delta\geq0\).

For technical reasons, we start with the case \(I=[0,\infty)\) and fix \(x>0\). For any \(\varepsilon>0\), we define the function \(g_{\varepsilon}\in\mathcal{L}_{\varepsilon}([0,\infty))\) as

$$ g_{\varepsilon}(y)=\frac{1}{2} \frac{|y-x|}{\varepsilon\sigma _{\varepsilon}(x)}+ \sum _{i=1}^{\infty}\frac{(y-x_{i})_{+}}{\varepsilon \sigma_{\varepsilon}(x_{i})}+\sum _{i=-m}^{-1}\frac{(y-x_{i})_{-}}{\varepsilon \sigma_{\varepsilon}(x_{i})}, \quad y\geq0, $$
(17)

as well as the quantity

$$ \delta_{\varepsilon}= \max \biggl\{ \frac{x_{i+1}-x_{i}}{2\sigma ((x_{i}+x_{i+1})/2)}: i \geq-(m+1) \biggr\} . $$
(18)

With these notations, we state our first main result.

Theorem 3.1

Let \(x>0\) and \(\varepsilon>0\). For any \(f\in\mathcal{C}([0,\infty))\), we have

$$ \bigl\vert Ef(Y)-f(x)\bigr\vert \leq Eg_{\varepsilon}(Y) \omega_{\sigma}^{2} (f;\varepsilon )+\bigl(1+Eg_{\varepsilon}(Y) \bigr)\omega_{\sigma}^{2} (f;\delta_{\varepsilon}). $$
(19)

If, in addition, \(f\in\mathcal{C}_{cx}([0,\infty))\), then

$$ \bigl\vert Ef(Y)-f(x)\bigr\vert \leq Eg_{\varepsilon}(Y) \bigl(\omega_{\sigma}^{2} (f;\varepsilon)+\omega_{\sigma}^{2} (f;\delta_{\varepsilon}) \bigr). $$
(20)

Proof

Let \(\tilde{f}\in\mathcal{L}_{\varepsilon}([0,\infty))\) be defined as

$$ \tilde{f}(x_{i})=f(x_{i}),\quad x_{i} \in\mathcal{N}_{\varepsilon}, i\geq-(m+1). $$
(21)

Applying Lemma 2.4 to the function \(f-\tilde{f}\) and recalling (18), we have, for \(i\geq-(m+1)\),

$$ \sup_{x_{i}\leq y\leq x_{i+1}} \bigl\vert f(y)-\tilde{f}(y)\bigr\vert \leq\omega_{\sigma}^{2} \biggl( f; \frac{x_{i+1}-x_{i}}{2\sigma((x_{i}+x_{i+1})/2)} \biggr)\leq \omega_{\sigma}^{2} (f;\delta_{\varepsilon}). $$
(22)

This readily implies that

$$ \bigl\vert Ef(Y)-E\tilde{f}(Y)\bigr\vert \leq \omega_{\sigma}^{2} (f;\delta_{\varepsilon}). $$
(23)

On the other hand, let \(x_{i}\in\mathcal{N}_{\varepsilon}\), \(i\geq-m\). Since \(\sigma_{\varepsilon}\) is nondecreasing, we have from (6), (12), and (21)

$$\begin{aligned} |\delta c_{i}|&=\biggl\vert \frac{\tilde{f}(x_{i+1})-\tilde {f}(x_{i})}{x_{i+1}-x_{i}}- \frac{\tilde{f}(x_{i})-\tilde {f}(x_{i-1})}{x_{i}-x_{i-1}}\biggr\vert \\ &=\biggl\vert \frac{\tilde{f}(x_{i}+\varepsilon\sigma_{\varepsilon}(x_{i}))-f(x_{i})}{\varepsilon\sigma_{\varepsilon}(x_{i})}-\frac {f(x_{i})-f(x_{i-1})}{\varepsilon\sigma_{\varepsilon}(x_{i})}\biggr\vert \\ &\leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} \bigl( \omega _{\sigma_{\varepsilon}}^{2} (f; \varepsilon)+ \bigl\vert \tilde{f}\bigl(x_{i}+\varepsilon \sigma_{\varepsilon}(x_{i})\bigr)-f\bigl(x_{i}+\varepsilon \sigma_{\varepsilon}(x_{i})\bigr)\bigr\vert \bigr) \\ & \leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} \bigl( \omega _{\sigma}^{2} (f; \varepsilon)+ \omega_{\sigma}^{2} (f; \delta_{\varepsilon}) \bigr), \end{aligned}$$
(24)

where in the last inequality we have used (22) and the fact that \(\omega_{\sigma_{\varepsilon}}^{2} (f;\cdot)\leq\omega_{\sigma}^{2}(f;\cdot)\), since \(\sigma_{\varepsilon}\leq\sigma\). Finally, \(EY=x\), by assumption (16). We thus have from (5)

$$ E\tilde{f}(Y)-f(x)=\frac{\delta c_{0}}{2}E|Y-x|+\sum_{i=1}^{\infty}\delta c_{i} E(Y-x_{i})_{+}+\sum_{i=-m}^{-1} \delta c_{i} E(Y-x_{i})_{-}, $$

which implies, by virtue of (17) and (24), that

$$ \bigl\vert E\tilde{f}(Y)-f(x)\bigr\vert \leq Eg_{\varepsilon}(Y) \bigl( \omega_{\sigma}^{2} (f;\varepsilon)+ \omega_{\sigma}^{2} (f;\delta_{\varepsilon}) \bigr). $$
(25)

Thus, inequality (19) follows from (23) and (25).

Suppose that \(f\in\mathcal{C}_{cx}([0,\infty))\). By subtracting an affine function, if necessary, we can assume without loss of generality that \(f(y)\geq f(x)=0\), \(y\in I\). The convexity of f and (21) imply that

$$ Ef(Y)\leq E\tilde{f}(Y). $$

This, together with (25), shows (20) and completes the proof. □

In the case \(\sigma=1\) and \(I=\mathbb {R}\), Theorem 3.1 takes on a simpler form.

Theorem 3.2

Let \(x\in \mathbb {R}\), \(\varepsilon>0\), and let ψ be as in (13). Then,

$$ \sup_{g\in\mathcal{L}_{\varepsilon}(\mathbb {R})} \frac {|Eg(Y)-g(x)|}{\omega^{2} (g;\varepsilon)}=\sup _{f\in\mathcal {C}_{cx}(\mathbb {R})} \frac{|Ef(Y)-f(x)|}{\omega^{2} (f;\varepsilon)}= E\psi \biggl( \frac{Y-x}{\varepsilon} \biggr). $$
(26)

If \(f\in\mathcal{C}(\mathbb {R})\), then

$$ \bigl\vert Ef(Y)-f(x)\bigr\vert \leq E\psi \biggl( \frac{Y-x}{\varepsilon} \biggr) \omega^{2} (f;\varepsilon)+ \omega^{2} \biggl(f; \frac{\varepsilon}{2} \biggr). $$
(27)

Proof

Let \(g\in\mathcal{L}_{\varepsilon}(\mathbb {R})\). By (11), the set of nodes under consideration in this case is \(\mathcal {N}_{\varepsilon}= \{x+\varepsilon i, i\in \mathbb {Z}\}\). Thus, we have from Lemma 2.1 and (16)

$$\begin{aligned} \bigl\vert Eg(Y)-g(x)\bigr\vert =&\Biggl\vert \frac{\delta c_{0}}{2} E|Y-x|+\sum_{i=1}^{\infty}\delta c_{i} E(Y-x_{i})_{+}+\sum_{i=-\infty}^{-1} \delta c_{i} E(Y-x_{i})_{-} \Biggr\vert \\ \leq&\varepsilon\sup_{i\in \mathbb {Z}} |\delta c_{i}| \Biggl( \frac {1}{2} E \biggl\vert \frac{Y-x}{\varepsilon} \biggr\vert +\sum _{i=1}^{\infty}E \biggl( \frac{Y-x}{\varepsilon}-i \biggr)_{+}+ \sum_{i=-\infty}^{-1} E \biggl( \frac{Y-x}{\varepsilon}-i \biggr)_{-} \Biggr) \\ =& \omega^{2} (g;\varepsilon)E\psi \biggl( \frac{Y-x}{\varepsilon} \biggr), \end{aligned}$$
(28)

where the last equality follows from (8) and (13). On the other hand, the function

$$ \psi_{\varepsilon}(y)=\psi \biggl( \frac{y-x}{\varepsilon} \biggr), \quad y\in \mathbb {R}, $$

belongs to \(\mathcal{L}_{\varepsilon}(\mathbb {R})\) and satisfies \(\psi _{\varepsilon}(x)=0\) and \(\omega^{2} (\psi_{\varepsilon};\varepsilon)=1\), as follows from (8). This, together with (28), shows that \(\psi_{\varepsilon}\) is a maximal function in \(\mathcal {L}_{\varepsilon}(\mathbb {R})\), i.e.,

$$ \sup_{g\in\mathcal{L}_{\varepsilon}(\mathbb {R})} \frac {|Eg(Y)-g(x)|}{\omega^{2} (g;\varepsilon)}=E \psi \biggl( \frac {Y-x}{\varepsilon} \biggr). $$

To prove the remaining statements, we follow the same steps as those in the proof of Theorem 3.1. Specifically, let \(f\in\mathcal {C}(\mathbb {R})\) and let \(\tilde{f}\in\mathcal{L}_{\varepsilon}(\mathbb {R})\) be such that

$$ \tilde{f}(x_{i})=f(x_{i}), \quad x_{i}=x+ \varepsilon i, i \in \mathbb {Z}. $$

Looking at (22), we have in this case

$$ \sup_{x_{i}\leq y\leq x_{i+1}} \bigl\vert f(y)-\tilde{f}(y)\bigr\vert \leq\omega^{2} \biggl( f;\frac{\varepsilon}{2} \biggr) , \quad i\in \mathbb {Z}. $$
(29)

In the same way, inequality (24) becomes \(|\delta c_{i}|\leq \varepsilon^{-1}\omega^{2} (f;\varepsilon)\), \(i\in \mathbb {Z}\), which implies that

$$ \bigl\vert E\tilde{f}(Y)-f(x)\bigr\vert \leq E \psi \biggl( \frac{Y-x}{\varepsilon} \biggr) \omega^{2} (f;\varepsilon). $$
(30)

Therefore, inequality (27) readily follows from (29) and (30). Finally, if \(f\in\mathcal{C}_{cx}(\mathbb {R})\), we have from (30)

$$ \bigl\vert Ef(Y)-f(x)\bigr\vert \leq\bigl\vert E\tilde{f}(Y)-f(x)\bigr\vert \leq E \psi \biggl(\frac {Y-x}{\varepsilon} \biggr) \omega^{2} (f; \varepsilon). $$

This, together with the fact that \(\psi\in\mathcal{C}_{cx}(\mathbb {R})\), shows the second equality in (26). The proof is complete. □

4 The Weierstrass operator

We illustrate Theorem 3.2 by considering the classical Weierstrass operators \((W_{n}, n=1,2,\ldots)\), which admit the following probabilistic representation:

$$ W_{n} f(x)= Ef \biggl( x+ \frac{Z}{\sqrt{n}} \biggr)= \int_{\mathbb {R}} f \biggl( x+ \frac{\theta}{\sqrt{n}} \biggr)\rho( \theta) \, d\theta , \quad x\in \mathbb {R}, n=1,2,\ldots, $$

where \(f\in\mathcal{C}(\mathbb {R})\) and Z is a random variable having the standard normal density ρ and distribution function Φ, respectively defined by

$$ \rho(\theta)=\frac{1}{\sqrt{2\pi}}e^{-\theta^{2}/2}, \quad \theta\in \mathbb {R}; \qquad \Phi(u)= \int_{-\infty}^{u} \rho(\theta)\, d\theta ,\quad u\in \mathbb {R}. $$

Also, we consider the constant

$$ K=\frac{1}{\sqrt{2\pi}}+2\sum_{i=1}^{\infty}\bigl(\rho(i)-i\bigl(1-\Phi (i)\bigr)\bigr)=0.58333\ldots, $$
(31)

as follows from numerical computations.
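
The numerical value of K can be reproduced with a few lines of Python (our own illustration; the series converges rapidly, and the 50-term cutoff below is an arbitrary safe choice):

```python
import math

def rho(t):                          # standard normal density
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi_bar(t):                      # 1 - Phi(t), via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# K from (31); 50 terms are far more than enough for the series.
K = 1.0 / math.sqrt(2.0 * math.pi) + 2.0 * sum(rho(i) - i * Phi_bar(i) for i in range(1, 51))
print(K)                             # 0.58333...
```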

Corollary 4.1

Let \(x\in \mathbb {R}\), \(n=1,2,\ldots\) , and let ψ and K be as in (13) and (31), respectively. Then,

$$ \sup_{g\in\mathcal{L}_{1/\sqrt{n}}(\mathbb {R})} \frac{|W_{n} g(x)-g(x)|}{\omega^{2}(g;1/\sqrt{n})}=\sup_{f\in\mathcal{C}_{cx}(\mathbb {R})} \frac{|W_{n} f(x)-f(x)|}{\omega^{2} (f;1/\sqrt{n})}=E\psi(Z)=K. $$

If \(f\in\mathcal{C}(\mathbb {R})\), then

$$ \bigl\vert W_{n} f(x)-f(x)\bigr\vert \leq K \omega^{2} (f;1/\sqrt{n})+\omega^{2} \bigl(f;1/(2\sqrt{n})\bigr). $$

If \(f\in L_{\alpha}(\mathbb {R})\), for some \(\alpha\in(0,2]\), then

$$ \bigl\vert W_{n} f(x)-f(x)\bigr\vert \leq \bigl(K+2^{-\alpha}\bigr)n^{-\alpha/2}. $$
(32)

Proof

Corollary 4.1 is a direct consequence of Theorem 3.2 by choosing \(\varepsilon=1/\sqrt{n}\) and \(Y=x+Z/\sqrt{n}\). It remains to show that \(E\psi(Z)=K\), as defined in (31). To this end, note that

$$ E(Z-i)_{+}= \int_{i}^{\infty}(\theta-i)\rho(\theta) \, d\theta= \rho (i)-i\bigl(1-\Phi(i)\bigr),\quad i=0,1,\ldots. $$

We therefore have from (13) and the symmetry of Z,

$$ E\psi(Z)=EZ_{+} +2 \sum_{i=1}^{\infty}E(Z-i)_{+}= \frac{1}{\sqrt{2\pi}}+2\sum_{i=1}^{\infty}\bigl( \rho(i)-i\bigl(1-\Phi(i)\bigr)\bigr)=K. $$

This completes the proof. □

It should be observed that the constant in (32) is less than or equal to 1 if

$$ \alpha\geq-\frac{\log(1-K)}{\log2}=1.26302\ldots. $$
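
This threshold can be recovered numerically from the value of K computed above (a one-line check, under the same truncation):

```python
import math
K = 0.5833333                                   # the constant from (31)
print(-math.log(1.0 - K) / math.log(2.0))       # 1.26302..., as stated above
```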

5 The Szász-Mirakyan operator

In this section, we will apply Theorem 3.1 to the classical Szász-Mirakyan operators \((L_{n}, n=1,2,\ldots)\). From a probabilistic viewpoint, such operators can be represented as follows. Let \((N_{\lambda}, \lambda\geq0)\) be the standard Poisson process, i.e., a stochastic process starting at the origin, having independent stationary increments and nondecreasing paths such that

$$ P(N_{\lambda}=k)=e^{-\lambda}\frac{\lambda^{k}}{k!},\quad k=0,1,\ldots ,\, \lambda\geq0. $$
(33)

Let \(n=1,2,\ldots\) and \(x\geq0\). Thanks to (33), the Szász-Mirakyan operator \(L_{n}\) can be written as

$$ L_{n}f(x)=\sum_{k=0}^{\infty}f \biggl( \frac{k}{n} \biggr) e^{-nx} \frac {(nx)^{k}}{k!}=Ef \biggl( \frac{N_{nx}}{n} \biggr), $$
(34)

where \(f\in\mathcal{C}([0,\infty))\). It is well known that

$$ E \biggl( \frac{N_{nx}}{n} \biggr)=x, \qquad E \biggl( \frac {N_{nx}}{n}-x \biggr)^{2}=\frac{x}{n}. $$
(35)
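
The representation (34) makes \(L_{n}f(x)\) straightforward to evaluate numerically: it is the expectation of \(f(N_{nx}/n)\) under a Poisson law with mean nx. A minimal sketch (the function name `szasz` and the tail tolerance are our own choices) that also checks the moment identities (35):

```python
import math

def szasz(f, n, x, tol=1e-12):
    """L_n f(x) from (34): E f(N_{nx}/n), summing the Poisson(nx) series
    until the remaining tail weight is below tol."""
    lam = n * x
    p = math.exp(-lam)           # P(N = 0)
    total, mass, k = p * f(0.0), p, 0
    while mass < 1.0 - tol:
        k += 1
        p *= lam / k             # Poisson recursion p_k = p_{k-1} * lam / k
        total += p * f(k / n)
        mass += p
    return total

n, x = 10, 2.0
# Check of the moment identities (35): mean x and centered second moment x/n.
print(szasz(lambda y: y, n, x))                 # ~ 2.0
print(szasz(lambda y: (y - x) ** 2, n, x))      # ~ 0.2 = x/n
```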

Accordingly, we choose in this case (recall (9), (10), and (12), as well as the subsequent comments)

$$ \sigma_{\varepsilon}(y)=\min \biggl( \frac{y}{\varepsilon},\sigma (y) \biggr),\qquad \sigma(y)=\sqrt{y},\quad y\geq0; \qquad \varepsilon = \frac{1}{\sqrt{n}}, \qquad a_{\varepsilon}=\frac{1}{n}. $$
(36)

As follows from (12), the set of nodes \(\mathcal{N}_{\varepsilon}=\{x_{i}, i\geq-(m+1)\}\), for \(\varepsilon=1/\sqrt{n}\), is given by

$$ x_{-(m+1)}=0, \qquad x_{0}=x, \qquad x_{i+1}-x_{i}=\sqrt{\frac {x_{i+1}}{n}},\quad i\geq-m, $$
(37)

\(x_{-m}\) being the unique node in the interval \((0,1/n]\). In order to apply Theorem 3.1 to the Szász-Mirakyan operators, we need to estimate the quantities \(\delta_{\varepsilon}\) and \(Eg_{\varepsilon}(N_{nx}/n)\), for \(\varepsilon=1/\sqrt{n}\). In this regard, the following two auxiliary results will be useful.

Lemma 5.1

If \(n=1,2,\ldots\) , then \(\delta_{1/\sqrt{n}}\leq1/\sqrt{2n}\), where \(\delta_{\varepsilon}\) is defined in (18).

Proof

Denote

$$ q(x_{i})=\frac{x_{i+1}-x_{i}}{2\sigma ( (x_{i}+x_{i+1})/2 )}=\sqrt{\frac{x_{i+1}}{2n(x_{i}+x_{i+1})}}, \quad i\geq-m, $$
(38)

where the last equality follows from (37). Observe that

$$ q(x_{-(m+1)})=\frac{x_{-m}}{\sqrt{2x_{-m}}}< \frac{1}{\sqrt{2n}}. $$

For \(i\geq-m\), we have from (37) and (38)

$$ q^{2}(x_{i})= \frac{x_{i+1}}{2n ( 2x_{i+1}-\sqrt{x_{i+1}/n} )}\leq\frac{1}{2n}, $$

since \(x_{i+1}\geq1/n\). This, together with (18), completes the proof. □

With the notations given in (36) and (37), we state the following lemma.

Lemma 5.2

Let \(n=1,2,\ldots\) and \(x>0\). If \(x>a_{\varepsilon}=1/n\), then:

  1. (a)

    For any \(s=1,\ldots, m-1\), we have

    $$ \sum_{i=-m}^{-(s+1)}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{-}\leq P\bigl(N_{nx}\leq \lceil nx_{-s}-1 \rceil\bigr)\leq\frac{nx}{ (n(x-x_{-s})+1 )^{2}} . $$
  2. (b)

    For any \(l=1,2,\ldots\) , we have

    $$ \sum_{i=l+1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq P\bigl(N_{nx}\geq \lfloor nx_{l} \rfloor\bigr)\leq\frac{x}{n(x_{l}-x)^{2}} . $$
  3. (c)

    If \(x_{-1}\geq a_{\varepsilon}=1/n\), then

    $$\begin{aligned}& \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)} E\biggl\vert \frac {N_{nx}}{n}-x\biggr\vert + \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})} E \biggl( \frac{N_{nx}}{n}-x_{-1} \biggr)_{-} \\& \quad {} +\sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq \frac{c+1}{4}+\frac {1}{4(c+1)}, \quad c=\sqrt{\frac{x}{x_{-1}}} . \end{aligned}$$
    (39)

    If, on the other hand, \(x\leq a_{\varepsilon}=1/n\), then

    $$ \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+}\leq P(N_{nx}\geq1). $$
    (40)

Proof

Let \(\lambda>0\). We first claim that

$$ E(N_{\lambda}-u)_{-}\leq u P\bigl(N_{\lambda}=\lceil u-1 \rceil\bigr), \quad u\leq\lambda, $$
(41)

as well as

$$ E(N_{\lambda}-u)_{+}\leq\lambda P\bigl(N_{\lambda}=\lfloor u \rfloor\bigr),\quad u\geq \lambda. $$
(42)

In fact, it follows from (33) that \(kP (N_{\lambda}=k)=\lambda P(N_{\lambda}=k-1)\), \(k=1,2,\ldots\) . Therefore,

$$\begin{aligned} E(N_{\lambda}-u)_{-} =&\sum_{k< u} (u-k) P(N_{\lambda}=k)=uP (N_{\lambda}< u)-\lambda P(N_{\lambda}< u-1) \\ =& u P\bigl(N_{\lambda}\in [u-1,u )\bigr)+(u-\lambda) P(N_{\lambda}< u-1)\leq u P\bigl(N_{\lambda}= \lceil u-1 \rceil\bigr), \end{aligned}$$

thus showing (41). Inequality (42) follows in a similar way. Second, we claim that

$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{-}\leq n (x_{i+1}-x_{i}) P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr), $$
(43)

for \(i=-m,\ldots, -(s+1)\). Actually, suppose that \(i=-m+1,\ldots, -(s+1)\). By (36), (37), and (41), the left-hand side in (43) is bounded above by

$$ \frac{\varepsilon\sigma_{\varepsilon}(x_{i})}{n(\varepsilon\sigma _{\varepsilon}(x_{i}))^{2}} E(N_{nx}-nx_{i})_{-}\leq n (x_{i+1}-x_{i})P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr). $$

As seen in (37), we have \(x_{-m}\leq a_{\varepsilon}= 1/n < x_{-m+1}\). Therefore,

$$\begin{aligned}& \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-m})} E \biggl( \frac {N_{nx}}{n}-x_{-m} \biggr)_{-} \\& \quad = P(N_{nx}=0)\leq\sqrt{nx_{-m+1}} P (N_{nx}=0) \\& \quad \leq n(x_{-m+1}-x_{-m})P\bigl(N_{nx}=\lceil nx_{-m}-1\rceil\bigr). \end{aligned}$$

Claim (43) is shown. On the other hand, it follows from (33) that the function \(h(k)=P(N_{\lambda}=k)\) is nondecreasing for \(0\leq k\leq\lfloor\lambda\rfloor\). This implies that

$$\begin{aligned}& \sum_{i=-m}^{-(s+1)}(nx_{i+1}-n x_{i})P\bigl(N_{nx}=\lceil nx_{i}-1 \rceil\bigr) \\& \quad \leq \int_{0}^{nx_{-s}}P \bigl(N_{nx}=\lceil u-1 \rceil\bigr) \,du \\& \quad =\sum_{k=0}^{\lceil nx_{-s}-1 \rceil-1}P(N_{nx}=k)+ \int_{\lceil nx_{-s}-1\rceil}^{nx_{-s}} P\bigl(N_{nx}=\lceil u-1 \rceil\bigr) \,du \\& \quad \leq P\bigl(N_{nx}\leq\lceil nx_{-s}-1 \rceil\bigr). \end{aligned}$$
(44)

On the other hand, by Markov’s inequality, we have

$$\begin{aligned} P\bigl(N_{nx}\leq\lceil nx_{-s}-1 \rceil\bigr) \leq& P \bigl(N_{nx}-nx\leq n(x_{-s}-x)-1\bigr) \\ \leq&\frac{E(N_{nx}-nx)^{2}}{(n(x-x_{-s})+1)^{2}}=\frac{nx}{(n(x-x_{-s})+1)^{2}}, \end{aligned}$$

where the last equality follows from (35). This, together with (43) and (44), shows part (a). Part (b) follows in a similar manner, using (42) instead of (41).

To show part (c), note that \(\sigma_{\varepsilon}(x_{i})=\sigma(x_{i})\), \(i\geq-1\), because \(x_{-1}\geq a_{\varepsilon}=1/n\). Consider the function

$$ h_{\varepsilon}(y)= \frac{1}{2\varepsilon\sigma(x)} \biggl\vert \frac {y}{n}-x\biggr\vert + \frac{1}{\varepsilon\sigma(x_{-1})} \biggl( \frac {y}{n}-x_{-1} \biggr)_{-}+ \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma (x_{i})} \biggl( \frac{y}{n}-x_{i} \biggr)_{+}, \quad y \geq0. $$

If \(y< nx\), it is easily checked from (36) and (37) that

$$ h_{\varepsilon}(y)=\frac{1}{2}\biggl\vert \frac{y-nx}{\sqrt{nx}}\biggr\vert +c \biggl(\biggl\vert \frac{y-nx}{\sqrt{nx}}\biggr\vert -1 \biggr)_{+}\leq\varphi _{c} \biggl( \biggl\vert \frac{y-nx}{\sqrt{nx}} \biggr\vert \biggr), $$
(45)

where the last inequality follows from Lemma 2.3. Similarly, if \(y\geq nx\) and \(i\geq1\), we have

$$ \frac{1}{\varepsilon\sigma(x_{i})} \biggl( \frac{y}{n}-x_{i} \biggr)_{+}=\sqrt{ \frac{x}{x_{i}}} \biggl( \frac{y-nx}{\sqrt{nx}}-\frac{\sqrt {x_{1}}+\cdots+\sqrt{x_{i}}}{\sqrt{x}} \biggr)_{+}\leq \biggl( \biggl\vert \frac {y-nx}{\sqrt{nx}} \biggr\vert -i \biggr)_{+}, $$

thus implying, by virtue of Lemma 2.3, that

$$ h_{\varepsilon}(y)\leq\frac{1}{2}\biggl\vert \frac{y-nx}{\sqrt{nx}} \biggr\vert +\sum_{i=1}^{\infty}\biggl( \biggl\vert \frac{y-nx}{\sqrt {nx}}\biggr\vert -i \biggr)_{+}\leq \varphi_{c} \biggl(\biggl\vert \frac{y-nx}{\sqrt {nx}} \biggr\vert \biggr). $$
(46)

We therefore have from (35), (45), and (46)

$$ Eh_{\varepsilon}(N_{nx})\leq E\varphi_{c} \biggl( \biggl\vert \frac {N_{nx}-nx}{\sqrt{nx}} \biggr\vert \biggr)=\frac{c+1}{4}+ \frac{1}{4(c+1)}. $$

This shows (39). Finally, we will show inequality (40). From (36) and (42), we get

$$\begin{aligned}& \sum_{i=1}^{\infty}\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{+} \\& \quad \leq\sum_{i=1}^{\infty}\frac {x}{\varepsilon\sqrt{x_{i}}} P\bigl(N_{nx}=\lfloor nx_{i} \rfloor\bigr)=\sum_{i=1}^{\infty}\frac{nx}{\sqrt{nx_{i}}}P \bigl(N_{nx}=\lfloor nx_{i} \rfloor \bigr) \\& \quad \leq\sum _{i=1}^{\infty}P\bigl(N_{nx}=\lfloor nx_{i} \rfloor\bigr)\leq P(N_{nx}\geq1), \end{aligned}$$

since, by assumption and (37), we have \(nx\leq1< nx_{i}\) and \(nx_{i+1}-nx_{i} = \sqrt{nx_{i+1}}>1\), \(i=1,2,\ldots\) . This shows (40) and completes the proof. □

Denote

$$ K_{n}(x)=Eg_{1/\sqrt{n}} \biggl( \frac{N_{nx}}{n} \biggr),\quad n=1,2,\ldots,\, x>0, $$
(47)

where \(g_{\varepsilon}\) is defined in (17). For the Szász-Mirakyan operator defined in (34), we state the following result.
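
Before stating it, we note that \(K_{n}(x)\) is directly computable: one builds the nodes (37), assembles \(g_{1/\sqrt{n}}\) from (17), and takes the Poisson expectation in (47). The sketch below (our illustration; the node and tail cutoffs are arbitrary) does exactly this, and its output can be compared with the limit (48) and the bound (49) proved next.

```python
import math

def K_n(n, x, tol=1e-12):
    """Numerical value of K_n(x) = E g_{1/sqrt(n)}(N_{nx}/n) from (17), (37), (47)."""
    eps_sig = lambda z: min(z, math.sqrt(z / n))      # eps*sigma_eps(z), eps = 1/sqrt(n)

    lam = n * x
    y_max = (lam + 12.0 * math.sqrt(lam) + 12.0) / n  # safely beyond the Poisson bulk

    right = []                                        # x_1 < x_2 < ... from (37)
    z = x
    while z <= y_max:
        u = (1.0 / math.sqrt(n) + math.sqrt(1.0 / n + 4.0 * z)) / 2.0
        z = u * u                                     # solves z' - sqrt(z'/n) = z
        right.append(z)

    left = []                                         # x_{-1} > ... > x_{-m} > 0
    z = x
    while z > 1.0 / n:                                # the step is explicit going left
        z -= math.sqrt(z / n)
        left.append(z)

    def g(y):                                         # g_eps(y) as in (17)
        val = abs(y - x) / (2.0 * eps_sig(x))
        val += sum((y - xi) / eps_sig(xi) for xi in right if y > xi)
        val += sum((xi - y) / eps_sig(xi) for xi in left if y < xi)
        return val

    p = math.exp(-lam)                                # Poisson(lam) expectation of g
    total, mass, k = p * g(0.0), p, 0
    while mass < 1.0 - tol:
        k += 1
        p *= lam / k
        total += p * g(k / n)
        mass += p
    return total

print(K_n(100, 2.0))   # close to K = 0.58333..., in line with (48)
print(K_n(1, 5.0))     # stays below 1.2, in line with (49)
```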

Theorem 5.3

Let \(n=1,2,\ldots\) , \(x>0\), and \(\sigma(y)=\sqrt{y}\), \(y\geq0\). Then:

  1. (a)

    If \(f\in\mathcal{C}([0,\infty))\), then

    $$ \bigl\vert L_{n} f(x)-f(x)\bigr\vert \leq K_{n}(x) \omega_{\sigma}^{2} \biggl( f;\frac{1}{\sqrt {n}} \biggr)+ \bigl(1+K_{n}(x)\bigr)\omega_{\sigma}^{2} \biggl( f; \frac{1}{\sqrt {2n}} \biggr). $$
  2. (b)

    If \(f\in\mathcal{C}_{cx}([0,\infty))\), then

    $$ \bigl\vert L_{n} f(x)-f(x)\bigr\vert \leq K_{n}(x) \biggl( \omega_{\sigma}^{2} \biggl( f;\frac {1}{\sqrt{n}} \biggr)+ \omega_{\sigma}^{2} \biggl( f;\frac{1}{\sqrt {2n}} \biggr) \biggr). $$
  3. (c)

    If \(f\in L_{\alpha}([0,\infty))\), for some \(\alpha\in (0,2]\), then

    $$ \bigl\vert L_{n}f(x)-f(x)\bigr\vert \leq \bigl( K_{n}(x)+2^{-\alpha/2}\bigl(1+K_{n}(x)\bigr) \bigr)n^{-\alpha/2}. $$

    The upper constants \(K_{n}(x)\) defined in (47) satisfy the following properties:

    $$ \lim_{n\to\infty} K_{n}(x)=K=0.58333\ldots, \quad x>0, $$
    (48)

    where K is the same constant as that in (31), as well as

    $$ 1\leq\sup\bigl\{ K_{n}(x): n=1,2,\ldots,\, x>0\bigr\} \leq1+\frac{1}{5}. $$
    (49)

Proof

Parts (a)-(c) are direct consequences of Theorem 3.1, by choosing \(\varepsilon=1/\sqrt{n}\) and \(Y=N_{nx}/n\), taking into account that \(\delta_{1/\sqrt{n}}\leq1/\sqrt{2n}\), as follows from Lemma 5.1.

To show (48), fix \(x>0\) and \(0<\tau<x\). Choose n large enough so that \(a_{\varepsilon}=1/n< x-\tau\). Let \(s=1,2,\ldots\) and \(l=2,3,\ldots\) be such that

$$ x_{-(s+1)}< x-\tau\leq x_{-s}, \qquad x_{l}\leq x+\tau< x_{l+1}. $$
(50)

Let \(i=1,\ldots,l-1\). From (36), (37), and (50), we see that

$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+}=\sqrt{ \frac{x}{x_{i}}}E \biggl( \frac {N_{nx}-nx}{\sqrt{nx}}- \bar{x}_{i} \biggr)_{+}, \quad \bar {x}_{i}= \frac{\sqrt{x_{1}}+\cdots+\sqrt{x_{i}}}{\sqrt{x}}. $$

Again by (50), this implies that

$$\begin{aligned}& \sqrt{\frac{x}{x+\tau}} E \biggl( \frac{N_{nx}-nx}{\sqrt {nx}} -i \sqrt{\frac{x+\tau}{x}} \biggr)_{+} \\& \quad \leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+}\leq E \biggl( \frac{N_{nx}-nx}{\sqrt {nx}}-i \biggr)_{+}. \end{aligned}$$
(51)

Similarly, we have, for \(i=-s,\ldots,-1\),

$$\begin{aligned} E \biggl( \frac{N_{nx}-nx}{\sqrt{nx}}-i \biggr)_{-} &\leq\frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})}E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{-} \\ &\leq\sqrt{\frac{x}{x-\tau}}E \biggl( \frac{N_{nx}-nx}{\sqrt {nx}}-i \sqrt{\frac{x-\tau}{x}} \biggr)_{-}. \end{aligned}$$
(52)

On the other hand, by the central limit theorem for the standard Poisson process, the random variable \((N_{nx}-nx)/\sqrt{nx}\) converges in law to the standard normal random variable Z, as \(n\to\infty\). Therefore, by the Helly-Bray theorem (cf. Billingsley [21], pp.335-338), we get from Lemma 5.2, (51), and (52)

$$\begin{aligned}& E \frac{|Z|}{2}+ \sqrt{\frac{x}{x+\tau}}\sum _{i=1}^{\infty}E \biggl( Z-i \sqrt{\frac{x+\tau}{x}} \biggr)_{+}+ \sum_{i=-\infty}^{-1} E(Z-i)_{-} \\& \quad \leq\varliminf_{n\to\infty} K_{n}(x)\leq\varlimsup_{n\to\infty} K_{n}(x) \\& \quad \leq E \frac{|Z|}{2}+\sum_{i=1}^{\infty}E(Z-i)_{+}+\sqrt{\frac{x}{x-\tau}}\sum _{i=-\infty }^{-1}E \biggl( Z-i\sqrt{\frac{x-\tau}{x}} \biggr)_{-}. \end{aligned}$$

Thus, (48) follows from (31) and Corollary 4.1 by letting \(\tau\to0\) in these last inequalities.

To show (49), let d be the largest solution to the equation

$$ d-\sqrt{d}-\sqrt{d-\sqrt{d}}=1,\quad d=\frac{4+\sqrt{5}+\sqrt{7+2\sqrt {5}}}{2}=4.811561 \ldots $$
(53)

and define the points

$$ x^{\star}=\frac{d}{n}, \qquad x^{\star}_{-1}=x^{\star}- \varepsilon\sigma \bigl(x^{\star}\bigr)=\frac{d-\sqrt{d}}{n}, \qquad x^{\star}_{-2}=x^{\star}_{-1}-\varepsilon\sigma \bigl(x^{\star}_{-1}\bigr)=\frac{1}{n}. $$
(54)

We distinguish the following cases:

Case 1. \(0< x\leq x^{\star}_{-2}=1/n\). Since \(EN_{nx}/n=x\), we have from (36)

$$ \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)} E \biggl\vert \frac {N_{nx}}{n}-x\biggr\vert = \frac{1}{2x} E\biggl\vert \frac{N_{nx}}{n}-x \biggr\vert = \frac{1}{x} E \biggl( \frac{N_{nx}}{n}-x \biggr)_{-}=P(N_{nx}=0). $$

We therefore have from (40) and (47)

$$ e^{-nx}=P(N_{nx}=0)\leq K_{n}(x)\leq P(N_{nx}=0)+P(N_{nx}\geq1)=1. $$
(55)

Letting \(x\to0\) in (55), we get the first inequality in (49).

Case 2. \(x^{\star}_{-2}=1/n< x\leq x^{\star}_{-1}\). From (54), we see that \(x_{-1}\leq x^{\star}_{-1}-\varepsilon\sigma (x^{\star}_{-1})=1/n\), thus implying that

$$ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})} E \biggl( \frac {N_{nx}}{n}-x_{-1} \biggr)_{-}=P(N_{nx}=0)\leq P(N_{1}=0)=e^{-1}. $$

Hence, we have from (47), (37), (51), and Lemma 2.3

$$\begin{aligned} K_{n}(x) \leq& e^{-1}+ \frac{1}{2\varepsilon\sigma_{\varepsilon}(x)}E \biggl\vert \frac{N_{nx}}{n}-x\biggr\vert +\sum_{i=1}^{\infty}\frac {1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac {N_{nx}}{n}-x_{i} \biggr)_{+} \\ \leq& e^{-1}+\frac{1}{2} E \biggl\vert \frac{N_{nx}-nx}{\sqrt{nx}} \biggr\vert +\sum_{i=1}^{\infty}E \biggl( \biggl\vert \frac{N_{nx}-nx}{\sqrt{nx}} \biggr\vert -i \biggr)_{+}\leq e^{-1}+ E \psi \biggl(\biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}} \biggr\vert \biggr) \\ \leq& e^{-1}+E\varphi_{1} \biggl(\biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}}\biggr\vert \biggr)=e^{-1}+\frac{1}{2}+ \frac{1}{8}< 1, \end{aligned}$$

where we have used (35) in the last equality.

Case 3. \(x^{\star}_{-1}< x\leq x^{\star}\). Again by (54), we see that \(x_{-2}\leq1/n < x_{-1}\). Thus,

$$\begin{aligned}& \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-2})} E \biggl( \frac {N_{nx}}{n}-x_{-2} \biggr)_{-}+ \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{-1})}E \biggl( \frac{N_{nx}}{n}-x_{-1} \biggr)_{-} \\& \quad =P(N_{nx}=0)+\sqrt{\frac{n}{x_{-1}}} \biggl( x_{-1} P(N_{nx}=0)+ \biggl( x_{-1}-\frac{1}{n} \biggr) P(N_{nx}=1) \biggr). \end{aligned}$$
(56)

Set \(\lambda=nx\) and note that \(\lambda> nx^{\star}_{-1}=d-\sqrt{d}\), as follows from (54). Since \(nx_{-1}=nx-\sqrt{nx}=\lambda-\sqrt {\lambda}\), the right-hand side in (56) becomes after some simple computations

$$ \biggl( 1+\lambda-\frac{\sqrt{\lambda}}{\sqrt{\lambda-\sqrt{\lambda }}} \biggr) e^{-\lambda} \leq(1+\lambda)e^{-\lambda}\leq(1+d-\sqrt {d})e^{-(d-\sqrt{d})}\leq0.264, $$
(57)

as follows from (53). As in Case 2, we have from (56) and (57)

$$ K_{n}(x)\leq0.264 + E\varphi_{1} \biggl( \biggl\vert \frac{N_{nx}-nx}{\sqrt {nx}}\biggr\vert \biggr)= 0.264+\frac{1}{2}+ \frac{1}{8}< 1. $$

Case 4. \(x^{\star}< x\). We claim that

$$ \sum_{i=-m}^{-2} \frac{1}{\varepsilon\sigma_{\varepsilon}(x_{i})} E \biggl( \frac{N_{nx}}{n}-x_{i} \biggr)_{-}\leq P \bigl(N_{nx}\leq\lceil nx_{-1}-1 \rceil\bigr)\leq \frac{1}{2}. $$
(58)

Actually, the first inequality in (58) readily follows from Lemma 5.2(a). As far as the second one is concerned, observe that \(nx_{-1}=nx-\sqrt{nx}< nx-1\), which implies that \(\lceil nx_{-1}-1 \rceil\leq\lceil nx-2\rceil\leq\lfloor nx \rfloor-1\). Therefore,

$$ P\bigl(N_{nx}\leq\lceil nx_{-1}-1 \rceil\bigr) \leq P\bigl(N_{nx}\leq\lfloor nx \rfloor-1\bigr)\leq P \bigl(N_{\lfloor nx \rfloor}\leq\lfloor nx \rfloor-1\bigr), $$
(59)

since \(N_{\lfloor nx \rfloor}\leq N_{nx}\). On the other hand, it has been shown in [22] that the sequence \((P(N_{k}\leq k-1), k=1,2,\ldots)\) strictly increases to \(1/2\). This, together with (59), shows claim (58).

Finally, it is easy to see that the function \(\sqrt{x/x_{-1}}\), \(x\geq x^{\star}\), strictly decreases. It therefore follows from (54) that

$$ \sqrt{\frac{x}{x_{-1}}}\leq\sqrt{\frac{x^{\star}}{x_{-1}^{\star}}}= \sqrt {\frac{d}{d-\sqrt{d}}}=:c^{\star}. $$
(60)

We thus have from (58), (60), and Lemma 5.2(c)

$$ K_{n}(x)\leq\frac{1}{2}+\frac{c^{\star}+1}{4}+ \frac{1}{4(c^{\star}+1)}=1.195045\ldots\leq1+\frac{1}{5}. $$

The proof is complete. □

As mentioned in the Introduction, Theorem 5.3 illustrates that the estimates of the general constants in (3) and (4) may be quite different. Such estimates mainly depend on two facts: the set of functions under consideration (parts (a)-(c) in Theorem 5.3), and the kind of estimate we are interested in, namely, pointwise estimate or uniform estimate (see equations (48) and (49), respectively).

References

  1. Ditzian, Z, Totik, V: Moduli of Smoothness. Springer Series in Computational Mathematics, vol. 9. Springer, New York (1987). doi:10.1007/978-1-4612-4778-4.


  2. Ditzian, Z, Ivanov, KG: Strong converse inequalities. J. Anal. Math. 61, 61-111 (1993). doi:10.1007/BF02788839


  3. Totik, V: Strong converse inequalities. J. Approx. Theory 76(3), 369-375 (1994). doi:10.1006/jath.1994.1023


  4. Sangüesa, C: Lower estimates for centered Bernstein-type operators. Constr. Approx. 18(1), 145-159 (2002)


  5. Guo, S, Qi, Q, Liu, G: The central approximation theorems for Baskakov-Bézier operators. J. Approx. Theory 147(1), 112-124 (2007). doi:10.1016/j.jat.2005.02.010


  6. Finta, Z: Direct and converse results for q-Bernstein operators. Proc. Edinb. Math. Soc. (2) 52(2), 339-349 (2009). doi:10.1017/S0013091507001228


  7. Özarslan, MA, Vedi, T: q-Bernstein-Schurer-Kantorovich operators. J. Inequal. Appl. 2013, 444 (2013). doi:10.1186/1029-242X-2013-444


  8. Liu, G, Yang, X: On the approximation for generalized Szász-Durrmeyer type operators in the space \(L_{p}[0,\infty)\). J. Inequal. Appl. 2014, 447 (2014). doi:10.1186/1029-242X-2014-447


  9. Gonska, HH: Two problems on best constants in direct estimates. In: Ditzian, Z, Meir, A, Riemenschneider, SD, Sharma, A (eds.) Problem Section of Proc. Edmonton Conf. Approximation Theory, p. 194. Am. Math. Soc., Providence (1983)


  10. Păltănea, R: Best constants in estimates with second order moduli of continuity. In: Approximation Theory (Witten, 1995). Math. Res., vol. 86, pp. 251-275. Akademie Verlag, Berlin (1995)


  11. Păltănea, R: Approximation Theory Using Positive Linear Operators. Birkhäuser Boston, Boston (2004). doi:10.1007/978-1-4612-2058-9.


  12. Adell, JA, Sangüesa, C: Real inversion formulas with rates of convergence. Acta Math. Hung. 100(4), 293-302 (2003). doi:10.1023/A:1025139103991


  13. Gonska, H, Păltănea, R: Quantitative convergence theorems for a class of Bernstein-Durrmeyer operators preserving linear functions. Ukr. Math. J. 62(7), 1061-1072 (2010). doi:10.1007/s11253-010-0413-8


  14. Adell, JA, Sangüesa, C: Upper estimates in direct inequalities for Bernstein-type operators. J. Approx. Theory 109(2), 229-241 (2001). doi:10.1006/jath.2000.3547


  15. Gavrea, I, Gonska, H, Păltănea, R, Tachev, G: General estimates for the Ditzian-Totik modulus. East J. Approx. 9(2), 175-194 (2003)


  16. Bustamante, J: Estimates of positive linear operators in terms of second-order moduli. J. Math. Anal. Appl. 345(1), 203-212 (2008). doi:10.1016/j.jmaa.2008.04.017


  17. Adell, JA, Sangüesa, C: A strong converse inequality for gamma-type operators. Constr. Approx. 15(4), 537-551 (1999). doi:10.1007/s003659900121


  18. Adell, JA, Lekuona, A: Quantitative estimates for positive linear operators in terms of the usual second modulus. Abstr. Appl. Anal. 2015, Article ID 915358 (2015). doi:10.1155/2015/915358


  19. Gonska, HH, Tachev, GT: On the constants in \(\omega^{\phi}_{2}\)-inequalities. In: Proceedings of the Fourth International Conference on Functional Analysis and Approximation Theory (Potenza, 2000), vol. II, pp. 467-477 (2002)


  20. Gonska, HH, Tachev, GT: The second Ditzian-Totik modulus revisited: refined estimates for positive linear operators. Rev. Anal. Numér. Théor. Approx. 32(1), 39-61 (2003)


  21. Billingsley, P: Probability and Measure, 3rd edn. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1995). A Wiley-Interscience publication


  22. Adell, JA, Jodrá, P: The median of the Poisson distribution. Metrika 61(3), 337-346 (2005). doi:10.1007/s001840400350



Acknowledgements

The authors thank the referees for their careful reading of the manuscript and for their suggestions, which greatly improved the final outcome. The authors are partially supported by Research Projects DGA (E-64), MTM2015-67006-P, and by FEDER funds.

Author information

Correspondence to José A Adell.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Adell, J.A., Lekuona, A. Towards the best constant in front of the Ditzian-Totik modulus of smoothness. J Inequal Appl 2016, 137 (2016). https://doi.org/10.1186/s13660-016-1078-0

