
On Shepard–Gupta-type operators

Abstract

A Gupta-type variant of Shepard operators is introduced, and convergence results, together with pointwise and uniform direct and converse approximation results, are given. An application to image compression improving a previous algorithm is also discussed.

1 Introduction

In the last decades Shepard operators have been the object of several papers, thanks to their interesting properties in classical approximation theory and in scattered data interpolation problems. In particular, Shepard operators are linear, positive, rational operators of interpolatory type, preserving constants and achieving approximation results not possible by polynomials. Pointwise and uniform approximation error estimates, converse results, bridge theorems, saturation statements and simultaneous approximation results can be found, for example, in [1–7]. Applications of Shepard operators to scattered data interpolation problems, image compression and CAGD can be found, for example, in [8–17].

On the other hand, Gupta introduced a variant of the classical Bernstein operator, and similar modifications of well-known positive operators of Bernstein type were studied by him, his collaborators and other researchers (see e.g. [18–25]).

It was an open problem to consider Gupta-type variants of Shepard operators.

The aim of the present paper is to give a positive answer to the above question by introducing a Gupta-type generalization of the Shepard operator depending on a real positive parameter. Convergence results and uniform and pointwise approximation error estimates for this operator are given in Theorems 2.1–2.2 in Sect. 2.1. As a particular case, we obtain the first pointwise approximation error estimate for the original Shepard operator on an equispaced mesh. Theorem 2.3 settles converse results and saturation statements for our operator. The corresponding proofs are based on direct estimates for the Shepard–Gupta-type operators.

In Sect. 2.2 an application to image compression is examined, improving an analogous algorithm in [9]; numerical experiments confirming that this technique outperforms other algorithms are also shown.

2 Results

For \(n \in \mathbb{N}\) consider the node matrix \(X= ( x_{n,k}= x_{k}=k/n, k=0,\ldots ,n ) \subseteq [0,1]\). Then, for any function \(f \in C([0,1])\), we denote by \(S_{n}^{s}\) the Shepard operator defined by

$$ S_{n}^{s} ( X;f;x ) =S_{n}^{s}(f;x)= \frac{ \sum _{k=0}^{n} \frac{f( x_{k})}{\vert x-x_{k}\vert ^{s} } }{ \sum _{k=0}^{n} \frac{1}{\vert x-x_{k}\vert ^{s}} }, $$
(1)

with \(x \in [0,1]\) and \(s >0\) (cf. [26]). From (1) we deduce that \(S_{n}^{s}\) is a positive, linear operator, preserving constants and interpolating f at \(x_{k}\), \(k=0,\ldots ,n\); moreover, \(S_{n}^{s}(f)\) is a rational function of degree \((sn,sn)\) for s even. Here we assume \(s>2\) because of theoretical complications for \(s\le 2\) (see, e.g., [3, 4]).
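
For concreteness, here is a minimal NumPy evaluation of (1) on the equispaced mesh \(x_{k}=k/n\) (a sketch: the function name and the test function are our own choices; at a knot we return the nodal value directly, since the operator interpolates there):

```python
import numpy as np

def shepard(f_vals, nodes, x, s=4.0):
    """Evaluate the Shepard operator (1) at x in [0, 1].

    f_vals are the data f(x_k) on the equispaced knots; s > 2 as assumed
    in the text. At a knot the 0/0 expression is replaced by the nodal
    value, which is what interpolation requires.
    """
    d = np.abs(x - nodes)
    j = np.argmin(d)
    if d[j] == 0.0:                  # x coincides with a knot
        return f_vals[j]
    w = 1.0 / d**s                   # basis 1/|x - x_k|^s
    return np.dot(w, f_vals) / np.sum(w)

n = 20
nodes = np.arange(n + 1) / n         # x_k = k/n, k = 0, ..., n
f = lambda t: np.sin(2 * np.pi * t)  # a smooth test function (our choice)
fv = f(nodes)
print(shepard(fv, nodes, nodes[7]) == fv[7])    # True: interpolation
print(abs(shepard(fv, nodes, 0.33) - f(0.33)))  # small pointwise error
```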

The approximation behavior of the \(S_{n}^{s}\) operator is well known: direct and converse results, saturation statements and simultaneous approximation estimates, not achievable by polynomials and corresponding to several node mesh distributions, can be found, for example, in [1, 2, 4–7, 13, 27]. Applications to scattered data interpolation problems, CAGD and image compression were also examined (see e.g. [8–16]).

On the other hand, Gupta introduced variants of Bernstein-type operators and studied their approximation properties (see e.g. [17–25]).

In the following subsection we extend such an approach to \(S_{n}^{s}\) and study Shepard–Gupta-type operators.

2.1 Approximation by Shepard–Gupta-type operators

For any \(\alpha \ge 1\) and \(s>2\) let

$$\begin{aligned} G_{n}^{\alpha ,s} ( X;f;x ) & = G_{n}^{\alpha ,s}(f;x) \\ & = \frac{ \sum _{k=0}^{n} f(x_{k}) [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }{ \sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }, \end{aligned}$$
(2)

with \(x \in [0,1]\). From the definition it follows immediately that \(G^{1,s}_{n}=S_{n}^{s}\); i.e., for \(\alpha =1\) we recover the original Shepard operator (1). Moreover, \(G_{n}^{\alpha ,s}\) is a positive, linear operator of interpolatory type and is stable in the Fejér sense, i.e., \(\forall x \in [0,1]\),

$$ \min_{0\le x\le 1} \bigl\vert f(x)\bigr\vert \le \bigl\vert G_{n}^{\alpha ,s}(f;x) \bigr\vert \le \max_{0\le x\le 1} \bigl\vert f(x)\bigr\vert . $$

We remark that the Gupta variants of Bernstein-type operators depend on a positive parameter not appearing in the kernel basis; here the parameter α appears both in the kernel basis \(\vert x-x_{l} \vert ^{-s\alpha }\) and in the exponents of the inner summations on the right-hand side of (2).

If we denote by \(x_{j}\) the closest knot to x, with \(x_{j} \le x \le x_{j+1}\), then \(f(x_{j})\) (and also \(f(x_{j+1})\) if \(x=(x_{j}+x_{j+1})/2\)) strongly influences \(G_{n}^{\alpha ,s}(f;x)\) in a small neighborhood of x (the "strong local control property"), as a consequence of the large value of \(1/(x-x_{j})^{s\alpha }\) in that range compared with the other terms. Consequently, for n and s fixed and α increasing, \(G_{n}^{\alpha ,s}(f;x)\) tends continuously to the step function

$$ \mathbb{S}(x)= \textstyle\begin{cases} f(x_{j}), & x_{j} \le x < x_{j+1/2}; \\ \frac{f(x_{j})+f(x_{j+1})}{2}, & x =x_{j+1/2}; \\ f(x_{j+1}), & x_{j+1/2} < x \le x_{j+1}, \end{cases} $$
(3)

with \(x_{j+1/2}=(j+1/2)/n\). We can argue analogously when \(x_{j}\) is the closest knot to x with \(x_{j-1}\le x \le x_{j}\).

By such asymptotic behavior we can use the operator \(G_{n}^{\alpha ,s}\) to successfully compress images represented by piecewise constant functions (see Sect. 2.2).
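
The following sketch (our own NumPy rendering of (2); the rescaling by the dominant term is a numerical device of ours, not part of the definition, and x is assumed not to be a knot, where \(G_{n}^{\alpha ,s}\) interpolates anyway) illustrates both the identity \(G^{1,s}_{n}=S_{n}^{s}\) and the drift toward the step function (3) as α grows:

```python
import numpy as np

def gupta_weights(nodes, x, s=4.0, alpha=1.0):
    """Weights g_k(x) of (2) at a non-knot x; they are >= 0 and sum to 1.

    The common factor 1/|x - x_j|^{s*alpha} (x_j the closest knot) is
    factored out to avoid overflow; it cancels after normalization.
    """
    d = np.abs(x - nodes)
    r = (d.min() / d) ** (s * alpha)        # ratios in (0, 1]
    total = np.sum(r)
    g = total ** (1 / alpha) - (total - r) ** (1 / alpha)
    return g / np.sum(g)

n, s = 20, 4.0
nodes = np.arange(n + 1) / n
f = lambda t: np.sin(2 * np.pi * t)
fv = f(nodes)
x = 0.33                                     # closest knot: x_7 = 0.35

# alpha = 1 recovers the classical Shepard operator (1)
w = 1.0 / np.abs(x - nodes) ** s
print(np.isclose(np.dot(gupta_weights(nodes, x, s, 1.0), fv),
                 np.dot(w, fv) / np.sum(w)))

# increasing alpha drives G_n^{alpha,s}(f; x) toward the limit f(x_7) in (3)
for alpha in (1.0, 2.0, 5.0, 20.0):
    print(alpha, np.dot(gupta_weights(nodes, x, s, alpha), fv), f(0.35))
```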

Now we show that we can use \(G_{n}^{\alpha ,s}\) to approximate functions from \(C([0,1])\). Indeed, let \(\Vert f \Vert \) be the usual supremum norm of \(f \in C([0,1])\) on \([0,1]\) and \(\omega (f)\) the usual modulus of continuity of f. Moreover, C, \(C_{1}\) denote positive constants possibly having different values even in the same formula; we say that \(a \sim b\) iff \(\vert a/b \vert \le C\) and \(\vert b/a \vert \le C_{1}\).

Theorem 2.1

Let \(\alpha \ge 1\). Then, for any \(f \in C([0,1])\) and \(n\in \mathbb{N}\),

$$ \bigl\Vert f-G_{n}^{\alpha ,s}(f) \bigr\Vert \le C \omega \biggl( f;\frac{1}{n} \biggr) . $$
(4)

Remark 2.1

Estimate (4) yields the uniform convergence of \(G_{n}^{\alpha ,s}(f)\) to f as \(n\to \infty \), \(\forall f \in C([0,1])\), \(\forall \alpha \ge 1\).
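
The uniform bound (4) is easy to probe numerically. The sketch below (self-contained; the test function, the grids and the use of \(\omega (f;1/n)\le 2\pi /n\) for \(f(t)=\sin 2\pi t\) are our own choices) estimates the sup-norm error on a fine grid and checks that its ratio to \(\omega (f;1/n)\) stays bounded, as (4) predicts:

```python
import numpy as np

def gupta_op(fv, nodes, xs, s=4.0, alpha=1.0):
    """Evaluate G_n^{alpha,s}(f) of (2) at points xs (assumed non-knots)."""
    out = np.empty_like(xs)
    for i, x in enumerate(xs):
        d = np.abs(x - nodes)
        r = (d.min() / d) ** (s * alpha)    # overflow-safe rescaling
        g = np.sum(r) ** (1 / alpha) - (np.sum(r) - r) ** (1 / alpha)
        out[i] = np.dot(g, fv) / np.sum(g)
    return out

f = lambda t: np.sin(2 * np.pi * t)         # omega(f; 1/n) <= 2*pi/n
for n in (10, 20, 40, 80):
    nodes = np.arange(n + 1) / n
    xs = np.linspace(1e-4, 1 - 1e-4, 997)   # fine grid avoiding the knots
    err = np.max(np.abs(f(xs) - gupta_op(f(nodes), nodes, xs, 4.0, 2.0)))
    print(n, err, err / (2 * np.pi / n))    # bounded ratio, as in (4)
```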

Proof

Since the \(G_{n}^{\alpha ,s}\) operator interpolates at \(x_{k}\), \(k=0,\ldots ,n\), let \(x\ne x_{k}\), \(k=0,\ldots ,n\). Then assume \(x_{j}\) to be the closest knot to x, with \(x_{j} < x <x_{j+1}\) (the case when \(x_{j+1} \) is the closest knot to x can be treated analogously). Therefore

$$ \vert x-x_{j}\vert \le \frac{1}{2n}. $$

We have

$$\begin{aligned} \bigl\vert f(x)-G_{n}^{\alpha ,s}(f;x) \bigr\vert & = \frac{\vert \sum _{k=0}^{n} [f(x)- f(x_{k})] [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] \vert }{ \sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] } \\ & \le \frac{ \omega ( f;\vert x-x_{j}\vert ) [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l \ne j}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }{ \sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] } \\ & \quad {}+ \frac{ \sum _{{k=0\atop k \ne j}}^{n} \omega ( f;\vert x-x_{k}\vert ) [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }{ \sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }. \end{aligned}$$

Since for \(b< a\) and \(\alpha \ge 1\) there exists \(\eta \in (b,a)\) such that

$$ a^{1/\alpha } - b^{1/\alpha } = \frac{a-b}{ \alpha \eta ^{1-1/\alpha } } \in \biggl( \frac{a-b}{ \alpha a^{1-1/\alpha } } , \frac{a-b}{ \alpha b ^{1-1/\alpha } } \biggr) , $$
(5)

working as usual (see e.g. [2]), it follows that

$$ \begin{aligned} \Biggl( \sum _{l=0}^{n} \frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{\frac{1}{\alpha }}- \Biggl( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{\frac{1}{\alpha }}& \le \frac{\frac{1}{\vert x-x_{k}\vert ^{s\alpha }}}{ \alpha ( \sum _{{l=0\atop l \ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{(\alpha -1)/\alpha }} \\ & \le \frac{C}{\alpha \vert x-x_{k}\vert ^{\alpha s}n^{s\alpha -s}}. \end{aligned} $$

Moreover,

$$ \frac{1}{\sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} - ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] } \le \frac{1}{ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne j}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}}. $$

Again by (5)

$$\begin{aligned} \Biggl( \sum_{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{\frac{1}{\alpha }}- \Biggl( \sum_{{l=0\atop l\ne j}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{\frac{1}{\alpha }} & \ge \frac{1/\vert x-x_{j}\vert ^{s\alpha }}{ \alpha ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{(\alpha -1)/\alpha }} \\ & := \Sigma . \end{aligned}$$
(6)

Hence by (6)

$$\begin{aligned} \frac{1}{\Sigma}& = \alpha \vert x-x_{j}\vert ^{s\alpha } \Biggl( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{1-1/\alpha } \\ & \le \alpha \vert x-x_{j}\vert ^{s\alpha (1-1/\alpha +1/\alpha )} \Biggl( \frac{1}{\vert x-x_{j}\vert ^{s\alpha } }+ \sum _{{l=0\atop l \ne j}}^{n} \frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{(\alpha -1)/\alpha } \\ & \le \alpha \vert x-x_{j}\vert ^{s} \Biggl( 1+\vert x-x_{j}\vert ^{s\alpha } \sum _{{l=0\atop l \ne j}}^{n} \frac{1}{\vert x-x_{l}\vert ^{s\alpha }} \Biggr) ^{(\alpha -1)/\alpha } \\ & \le C \alpha \vert x-x_{j}\vert ^{s}. \end{aligned}$$

Finally, collecting the above estimates and working as usual (see e.g. [2]), we obtain

$$\begin{aligned} \bigl\vert f(x)-G_{n}^{\alpha ,s}(f;x)\bigr\vert &\le C \Biggl[ \omega \bigl( f; \vert x-x_{j} \vert \bigr) +\frac{\vert x-x_{j}\vert ^{s}}{n^{s\alpha -s}} \sum_{{k=0\atop k \ne j}}^{n} \frac{\omega ( f;\vert x-x_{k}\vert ) }{ \vert x-x_{k}\vert ^{\alpha s} } \Biggr] \\ &\le C \omega \biggl( f;\frac{1}{n} \biggr) . \end{aligned}$$

 □

Moreover, a pointwise approximation error estimate can be deduced.

Theorem 2.2

Let \(\alpha \ge 1\). Then, for any \(f \in C([0,1])\), \(n\in \mathbb{N}\) and for any \(x \in [0,1]\),

$$ \bigl\vert f(x)-G_{n}^{\alpha ,s}(f;x)\bigr\vert \le C \omega \bigl(f; \vert x-x_{j}\vert \bigr), $$

with \(x_{j}\) the closest knot to x.

Remark 2.2

From Theorem 2.2, for \(\alpha =1\), we obtain

$$ \bigl\vert f(x)-S_{n}^{s}(f;x)\bigr\vert \le C \omega \bigl(f; \vert x-x_{j}\vert \bigr). $$
(7)

This is the first pointwise estimate for the Shepard operator on an equispaced mesh, and it reflects the interpolatory character of \(G_{n}^{\alpha ,s}\) at the knots \(x_{k}\), \(k=0,\ldots ,n\), and the constant preservation property. A similar estimate was obtained for a generalization of the Shepard operator in [9]. The result in (7) is interesting; indeed, the Shepard operator is strongly influenced by the mesh distribution, and pointwise error estimates for Shepard operators on nonuniformly spaced meshes present a function depending on the mesh thickness on the right-hand side (see e.g. [2, 4]); on the contrary, for the equispaced case pointwise estimates as in [2, 4] are unnatural.
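
A quick numerical illustration of (7) follows (a sketch with our own choices of f, n and s; the constant C is not computed, we only watch the ratio of the error to \(\vert x-x_{j}\vert \) stay bounded as x approaches a knot):

```python
import numpy as np

f = lambda t: np.abs(t - 0.5)        # Lipschitz 1, so omega(f; h) <= h
n, s = 20, 4.0
nodes = np.arange(n + 1) / n
fv = f(nodes)

xj = nodes[7]                        # approach the knot x_7 = 0.35
for h in (1e-2, 1e-3, 1e-4, 1e-5):
    x = xj + h
    w = 1.0 / np.abs(x - nodes) ** s # Shepard weights, i.e. alpha = 1
    err = abs(f(x) - np.dot(w, fv) / np.sum(w))
    print(h, err, err / h)           # err/h stays bounded, as (7) predicts
```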

Proof

Following the proof of Theorem 2.1 we have

$$\begin{aligned} \bigl\vert f(x)-G_{n}^{\alpha ,s}(f;x)\bigr\vert & \le C \biggl\{ \omega \bigl(f;\vert x-x_{j} \vert \bigr)+ \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}} \frac{\omega (f;\vert x-x_{j+1}\vert )}{\vert x-x_{j+1}\vert ^{\alpha s}} \\ & \quad {} + \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}} \sum_{{k\ne j\atop k\ne j+1}} \frac{\omega (f;\vert x-x_{k}\vert )}{\vert x-x_{k}\vert ^{\alpha s}} \biggr\} . \end{aligned}$$

Obviously

$$\begin{aligned} \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}} \frac{\omega (f;\vert x-x_{j+1}\vert )}{\vert x-x_{j+1}\vert ^{\alpha s}} & \le C \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}}\frac{\omega (f;\vert x-x_{j}\vert )}{\vert x-x_{j}\vert \vert x-x_{j+1}\vert ^{\alpha s-1}} \le C\omega \bigl(f;\vert x-x_{j}\vert \bigr). \end{aligned}$$

Moreover, since \(x-x_{k} > (j-k)/n\), \(k=0,\ldots ,j-1\),

$$ \begin{aligned} \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}} \sum _{k=0}^{j-1} \frac{\omega (f;\vert x-x_{k}\vert )}{\vert x-x_{k}\vert ^{\alpha s }} & \le C\vert x-x_{j}\vert ^{s}\omega \biggl( f;\frac{1}{n} \biggr) \sum _{k=0}^{j-1} \frac{(j-k)n^{s}}{(j-k)^{\alpha s}} \\ & \le C\vert x-x_{j}\vert ^{s} n^{s} \omega \biggl( f;\frac{1}{n} \biggr) \\ & \le C\vert x-x_{j}\vert ^{s} n^{s} \omega \biggl( f;\frac{\vert x-x_{j}\vert }{n\vert x-x_{j}\vert } \biggr) \\ & \le C\vert x-x_{j}\vert ^{s} n^{s} \biggl( 1+\frac{1}{n\vert x-x_{j}\vert } \biggr) \omega \bigl( f;\vert x-x_{j}\vert \bigr) \\ & \le C\omega \bigl( f;\vert x-x_{j}\vert \bigr). \end{aligned} $$

Similarly we work for \(k=j+2,\ldots ,n\).

Collecting all estimates, the assertion follows. □

Finally, we present the converse results for our operators.

Theorem 2.3

If \(f \ne \mathrm{constant}\), then

$$ \limsup_{n \rightarrow \infty } \frac{\Vert G_{n}^{\alpha ,s}(f)-f \Vert }{\omega (f; 1/ {n})} \sim 1, $$
(8)

where the constants implicit in ∼ do not depend on f. Moreover,

$$\begin{aligned}& \bigl\Vert G_{n}^{\alpha ,s}(f)-f \bigr\Vert =o\biggl( \frac{1}{n} \biggr)\quad \iff \quad f=\mathrm{constant}, \end{aligned}$$
(9)
$$\begin{aligned}& \bigl\Vert G_{n}^{\alpha ,s}(f)-f \bigr\Vert =O \biggl( \frac{1}{n} \biggr) \quad \iff \quad \omega (f;t) \le C t. \end{aligned}$$
(10)

Remark 2.3

First we observe that estimate (8) is a counterpart of (4) and is in some sense the analog of the relation by Totik [28],

$$ \bigl\Vert B_{n}(f)-f \bigr\Vert \sim \omega ^{2}_{\psi } \biggl( f ; \frac{1}{\sqrt{n}} \biggr) , $$

with \(B_{n}\) the classical Bernstein operator, \(f \in C([0,1])\) and \(\omega ^{2}_{\psi }\) the second-order Ditzian–Totik modulus of smoothness with \(\psi (x)=\sqrt{x(1-x)}\). On the other hand, due to the interpolatory behavior of \(G_{n}^{\alpha ,s}\), we cannot have estimate (8) with "lim" (instead of "lim sup") because of a result stated in [3, p. 77] (cf. also [7, Theorem 2.1, p. 310]).

From (8) we deduce that the direct estimate (4) cannot be improved.

Combining estimate (8) with the equivalence relation \(\omega (f;t) \sim K(f;t)\) (see, e.g., [29]), with \(K(f;t)\) the corresponding K-functional, allows one to characterize such K-functionals.

Finally, the saturation problem for \(G_{n}^{\alpha ,s}\) is settled by Eqs. (9)–(10).

Proof

We start by proving (8). From (2) we can write the operator \(G_{n}^{\alpha ,s}\) as

$$\begin{aligned}& G_{n}^{\alpha ,s}(f;x) = \sum_{k=0}^{n} g_{k}(x)f(x_{k}), \\& g_{k}(x) = \frac{ [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }{ \sum _{k=0}^{n} [ ( \sum _{l=0}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=0\atop l\ne k}}^{n}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] }. \end{aligned}$$

Now if we verify that

$$\begin{aligned}& G_{n}^{\alpha ,s}(f;x)=f(x), \quad \mbox{if } f= \mathrm{constant}, \end{aligned}$$
(11)
$$\begin{aligned}& \sum _{\vert x-x_{k}\vert \ge d_{0}} \bigl\vert g_{k}(x)\bigr\vert =o \biggl( \frac{1}{n} \biggr) , \quad d_{0} >0 \mbox{ arbitrarily fixed}, \end{aligned}$$
(12)
$$\begin{aligned}& g_{j}(x) >1/2, \quad \mbox{if } \vert x-x_{j}\vert \le \frac{\delta }{n}, \quad 0 \le \delta < d_{1}< 1, \end{aligned}$$
(13)
$$\begin{aligned}& \sum _{k \ne j }\vert x-x_{k} \vert \bigl\vert g_{k}(x) \bigr\vert \le d_{2} \frac{\delta ^{1+\epsilon }}{n}, \quad \delta \mbox{ as above}, \end{aligned}$$
(14)

with \(x_{j}\) again the closest knot to x and with certain positive fixed reals \(d_{1}\), \(d_{2}\), ϵ, then by using [30, Theorem 2.1] it follows that

$$\begin{aligned}& \limsup_{n \to \infty } n \bigl\Vert G_{n}^{\alpha ,s} (f)-f \bigr\Vert >CM(f), \\& M(f)= \sup_{x} M(f;x),\qquad M(f;x):=\limsup _{\tau \to x} \frac{\vert f(\tau ) -f(x)\vert }{\vert \tau -x\vert } . \end{aligned}$$
(15)

First we prove (11)–(14). Equation (11) follows immediately from the definition. Following the proofs of Theorems 2.1–2.2 we obtain

$$ \sum_{\vert x-x_{k}\vert >d_{0}} g_{k}(x) \le \frac{C}{n^{ \alpha s}} \sum_{\vert x-x_{k}\vert >d_{0}}\frac{1}{\vert x-x_{k}\vert ^{\alpha s}}\le \frac{C}{n^{\alpha s}} \frac{n+1}{d_{0}^{\alpha s}} =o \biggl( \frac{1}{n} \biggr) , $$

that is, (12). Now we verify (13). Again working as in the proofs of Theorems 2.1–2.2,

$$\begin{aligned} \sum_{k \ne j } g_{k}(x) & \le \biggl[ \sum _{\vert x-x_{k} \vert \le 1}+\sum_{\vert x-x_{k} \vert \ge 1} \biggr] g_{k}(x) \\ & \le C \frac{\delta ^{s}n^{\alpha s}}{n^{\alpha s}} + C\frac{ \delta ^{s}n}{ n^{\alpha s} } \\ & \le C \delta ^{s} \biggl( 1+ \frac{1}{n^{\alpha s-1}} \biggr) \\ &\le \frac{1}{2} \end{aligned}$$

and by \(g_{k} (x)\ge 0\) and \(\sum g_{k}(x)=1\), (13) follows. Now we prove (14). Indeed

$$\begin{aligned} \sum _{k \ne j} \vert x-x_{k}\vert g_{k}(x) & \le C \frac{\vert x-x_{j}\vert ^{s}}{n^{\alpha s-s}} \biggl[ \sum _{ \vert x-x_{k}\vert \le 1 } +\sum _{\vert x-x_{k}\vert >1} \biggr] \frac{\vert x-x_{k}\vert }{\vert x-x_{k}\vert ^{\alpha s}} \\ & \le C\frac{\delta ^{s}}{n^{\alpha s}} \bigl[ n^{\alpha s-1} +n \bigr] \\ &\le C \frac{\delta ^{1+\epsilon }}{n}, \end{aligned}$$

i.e. we deduce (14). From (15) and (4) we have (cf. [7, p. 315])

$$\begin{aligned} C_{1}M(f) & \le n \bigl\Vert G_{n}^{\alpha ,s}(f) -f \bigr\Vert \le C_{2} n \omega \biggl( f; \frac{1}{n} \biggr) \\ & \le C_{2} \sup_{\tau \ne t}\frac{\vert f(\tau )-f(t)\vert }{\vert \tau -t\vert }:=C_{2}N(f). \end{aligned}$$
(16)

Now we recall that ([7, Lemma 3.1, p. 315])

$$ M(f)=N(f). $$

Therefore

$$ C_{1} M(f) \le C_{2} n \omega \biggl( f; \frac{1}{n} \biggr) \le C_{2} M (f) $$
(17)

and from (4), (16) and (17) we deduce (8). The proofs of (9) and (10) are omitted since they are analogous to the proof of Theorem 2.2 p. 316 in [7]. □

2.2 Application to image compression

In this section we apply the \(G_{n}^{\alpha ,s}\) operator to a problem of image compression. From a mathematical point of view an image can be considered as a matrix of size \(M\times N\) pixels, where the number of pixels affects the resolution of the image and the size of the file that stores it (the higher the number of pixels, the better the resolution and the larger the file). To obtain a degraded (compressed) image, we split the original image into consecutive blocks of size \(\mathcal{B}\times \mathcal{B}\), keeping only the upper-left pixel of each block. We obtain a new image with a lower number of pixels (\(M/\mathcal{B}\times N/\mathcal{B}\) pixels), and therefore a worse resolution and a smaller file size. The resulting compression ratio is \(\rho \simeq \mathcal{B}^{2}\). We aim at decompressing the reduced image to rebuild the full-resolution one. Since the sensors of cameras are uniformly distributed according to a bidimensional grid, we need a bidimensional interpolation process based on an equispaced mesh; in addition, for physical reasons related to the range of the color intensity of the red, green and blue components (\([0,1]\)), it is preferable to rely on a positive operator. Therefore we consider the bidimensional operator \(G_{M,N}^{\alpha ,s}(f)\) defined by

$$\begin{aligned} \begin{aligned} & G_{M,N}^{\alpha ,s}(f;x,y) = \sum _{k=1}^{M} \sum _{i=1}^{N}g_{k,M}(x) g_{i,N}(y)f(x_{k},y_{i}), \\ &g_{k,M}(x) = \frac{ ( \sum _{l=1}^{M}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=1\atop l\ne k}}^{M}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} }{ \sum _{k=1}^{M} [ ( \sum _{l=1}^{M}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=1\atop l\ne k}}^{M}\frac{1}{\vert x-x_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] } \\ &\hphantom{g_{k,M}(x) } = \frac{ ( \sum _{l=1}^{M} \prod _{j\ne l} \vert x-x_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }}- ( \sum _{{l=1\atop l\ne k}}^{M} \prod _{j\ne l} \vert x-x_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }} }{ \sum _{k=1}^{M} [ ( \sum _{l=1}^{M} \prod _{j\ne l} \vert x-x_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }}- ( \sum _{{l=1\atop l\ne k}}^{M} \prod _{j\ne l} \vert x-x_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }} ] }, \\ &g_{i,N}(y) = \frac{ ( \sum _{l=1}^{N}\frac{1}{\vert y-y_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum _{{l=1\atop l\ne i}}^{N}\frac{1}{\vert y-y_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} }{ \sum_{k=1}^{N} [ ( \sum_{l=1}^{N}\frac{1}{\vert y-y_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }}- ( \sum_{{l=1\atop l\ne k}}^{N}\frac{1}{\vert y-y_{l}\vert ^{s\alpha }} ) ^{\frac{1}{\alpha }} ] } \\ &\hphantom{g_{i,N}(y) } = \frac{ ( \sum_{l=1}^{N}\prod_{j\ne l} \vert y-y_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }}- ( \sum_{{l=1\atop l\ne i}}^{N}\prod_{j\ne l} \vert y-y_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }} }{ \sum_{k=1}^{N} [ ( \sum_{l=1}^{N}\prod_{j\ne l} \vert y-y_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }}- ( \sum_{{l=1\atop l\ne k}}^{N}\prod_{j\ne l} \vert y-y_{j}\vert ^{s\alpha } ) ^{\frac{1}{\alpha }} ] }, \end{aligned} \end{aligned}$$
(18)

with \(x,y\in [0,1]\), \(x_{i}=(i-1)/(M-1)\), \(i=1,\ldots ,M\), \(y_{j}=(j-1)/(N-1)\), \(j=1,\ldots ,N\). We observe that for computer calculations the nonbarycentric-type representations on the right-hand side of (18) are suitable. We can write Eq. (18) as

$$ G_{M,N}^{\alpha ,s}(f;x,y) = \sum_{k=1}^{M} \Biggl[ \sum_{i=1}^{N} f(x_{k},y_{i})g_{i,N}(y) \Biggr] g_{k,M}(x). $$

This allows one to develop a two-step procedure, each step involving the same unidimensional operator of the type (2), applied first to the rows of the matrix of pixels and then to the columns of the matrix resulting from the first step (or vice versa).
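
In code, this separability reduces the 2-D reconstruction to two matrix sweeps of the 1-D weights from (2). Below is a minimal sketch of the whole pipeline (function names are ours; mapping the reduced image onto knots \((i-1)/(m-1)\) and the overflow-safe rescaling are our implementation choices, not prescribed by the text):

```python
import numpy as np

def g_matrix(knots, xs, s=4.0, alpha=2.0):
    """Row-stochastic matrix W with W[i, k] = g_k(xs[i]) as in (2)/(18)."""
    W = np.empty((xs.size, knots.size))
    for i, x in enumerate(xs):
        d = np.abs(x - knots)
        j = np.argmin(d)
        if d[j] == 0.0:                  # x is a knot: interpolate there
            W[i] = 0.0; W[i, j] = 1.0; continue
        r = (d[j] / d) ** (s * alpha)    # overflow-safe rescaling
        g = np.sum(r) ** (1 / alpha) - (np.sum(r) - r) ** (1 / alpha)
        W[i] = g / np.sum(g)
    return W

def compress(img, B):
    """Keep the upper-left pixel of each B x B block (ratio ~ B**2)."""
    return img[::B, ::B]

def decompress(small, M, N, s=4.0, alpha=2.0):
    """Two-step tensor-product reconstruction: rows first, then columns."""
    m, n = small.shape
    Wx = g_matrix(np.arange(m) / (m - 1), np.arange(M) / (M - 1), s, alpha)
    Wy = g_matrix(np.arange(n) / (n - 1), np.arange(N) / (N - 1), s, alpha)
    return Wx @ small @ Wy.T             # G_{M,N}^{alpha,s} on the pixel grid
```

For \(\alpha =1\) the same code performs the bidimensional Shepard decompression used as a baseline below.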

We compare the results obtained by the \(G_{M,N}^{\alpha ,s}\) operator with bi-linear, bi-cubic and bi-spline methods. For the comparison we use the signal-to-noise ratio, SNR, defined as

$$ \mathrm{SNR}=10 \log _{10} \frac{(2^{B}-1)^{2}}{\mathrm{MSE}}, $$

with B denoting the number of bits necessary to represent the intensity of the pixels and

$$ \mathrm{MSE}=\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (f_{ij} - \hat{f}_{ij})^{2}, $$

where \(f_{ij}\) is the original image at pixel (i, j), \(i=1,\ldots ,M\), \(j=1,\ldots ,N\), and \(\hat{f}_{ij}\) is the image resulting after decompression by the original bidimensional Shepard operator, the \(G_{M,N}^{\alpha ,s}\) operator, or bi-linear, bi-cubic and bi-spline functions. The SNR compares the level of the compression error to the level of the signal: the higher the SNR, the better the approximation of the original image.
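
In code the figure of merit reads (a minimal sketch; bits \(=8\) for the gray scale used below, and the ideal case \(\mathrm{MSE}=0\) is not guarded against):

```python
import numpy as np

def snr_db(original, reconstructed, bits=8):
    """SNR in dB of a reconstruction against the peak value 2**bits - 1."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)                     # the MSE defined above
    return 10.0 * np.log10((2 ** bits - 1) ** 2 / mse)
```

Combined with the compress/decompress sketch above, this yields the quantity reported in Table 1.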

By construction of the \(G_{M,N}^{\alpha ,s}\) operators (cf. (3)), images that can be represented by piecewise constant functions are better approximated; therefore a synthetic image having such a feature will be considered. We notice that tuning the parameter α permits one to get a better approximation error.

According to the comment above, we consider as a test image a chessboard (Fig. 1) with 2048 pixels in each coordinate (\(M=N=2048\)), having 20 alternating boxes in each row or column of the chessboard. The usual 8-bit gray scale representation is considered for the color, so that \(B=8\). We generated reduced-resolution images at compression ratios \(\rho =4,9,16,25,36\) (\(\mathcal{B}=2,\ldots ,6\)).
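
For reference, the synthetic test image can be generated as follows (our own construction; since 2048 is not a multiple of 20, box sizes differ by one pixel here, a detail the text does not specify):

```python
import numpy as np

M = N = 2048
boxes = 20                                   # alternating boxes per side
i, j = np.indices((M, N))                    # pixel row/column indices
# the parity of the box indices decides the color: 0 = black, 255 = white
chessboard = (((i * boxes // M) + (j * boxes // N)) % 2).astype(np.uint8) * 255
```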

Figure 1. Image of the chessboard chosen as a test example.

The values of the SNR for the bi-linear, bi-cubic, bi-spline, Shepard (\(s=4,6\)), \(G_{M,N}^{\alpha ,4}\) and \(G_{M,N}^{\alpha ,6}\) operators, with \(\alpha =1.1,1.3,2,3,5,10\) and compression ratios \(\rho =4,9,16,25,36\), are shown in Table 1.

Table 1. SNR of the decompressed images for the chessboard test example at compression ratios \(\rho =4,9,16,25,36\) for the bi-linear, bi-cubic, bi-spline, Shepard (\(s=4,6\)), \(G_{M,N}^{\alpha ,4}\) and \(G_{M,N}^{\alpha ,6}\) operators, \(\alpha =1.1,1.3,2,3,5,10\). The higher the SNR, the more accurate the methodology.

We can see that the Shepard–Gupta-type operator (18) gives the best results at any compression ratio and that accuracy improves when α increases.

Figure 2 shows the decompressed images for the bi-linear, bi-cubic, bi-spline, Shepard (\(s=4,6\)), \(G_{M,N}^{\alpha ,4}\) and \(G_{M,N}^{\alpha ,6}\) operators, \(\alpha =2,10\), obtained for compression ratio 25. We notice the gray color of the truly white boxes in the chessboard for the bi-spline and bi-cubic operators (middle and right upper plots). It is due to overshoots (pixels having intensities greater than 1) and undershoots (pixels with intensities less than 0). As is well known, these artifacts are particularly deleterious for images. The bi-linear and Shepard–Gupta-type operators, being stable in the Fejér sense, do not suffer from this artifact.

Figure 2. From top to bottom and left to right: the chessboard image decompressed by the bi-linear, bi-cubic, bi-spline, Shepard (\(s=4\)), \(G_{M,N}^{2,4}\), \(G_{M,N}^{10,4}\), Shepard (\(s=6\)), \(G_{M,N}^{2,6}\) and \(G_{M,N}^{10,6}\) operators, starting from the image compressed with ratio 25.

To better appreciate this artifact and the differences among the above methodologies, Fig. 3 shows the (absolute) error of the decompressed images for the bi-cubic and bi-spline operators only, at different compression ratios (\(\rho =9\), 25, 49), since the other operators are not affected by the overshoot-undershoot artifact. Overshoots and undershoots are represented in red and blue, respectively.

Figure 3. Error of the decompressed images for the chessboard test example (detail). From top to bottom and left to right: bi-cubic and bi-spline for \(\rho =9\), bi-cubic and bi-spline for \(\rho =25\), bi-cubic and bi-spline for \(\rho =49\). Blue and red indicate undershoots and overshoots, respectively.

A full assessment of all considered methods is given graphically in Fig. 4, a detail view analogous to Fig. 3. The figure shows the smaller error (higher SNR) achieved by the Shepard–Gupta-type method.

Figure 4. Error of the decompressed images for the chessboard test example (detail). From top to bottom and left to right: bi-linear, bi-cubic, bi-spline, Shepard (\(s=4\)), \(G_{M,N}^{2,4}\), \(G_{M,N}^{10,4}\), Shepard (\(s=6\)), \(G_{M,N}^{2,6}\), \(G_{M,N}^{10,6}\) operators for compression ratio \(\rho =25\). Blue and red indicate undershoots and overshoots, respectively.

3 Conclusions

The paper gives a positive answer to the problem of extending the Bézier variant technique, introduced and studied by Gupta for the well-known linear positive operators of Bernstein type, to the Shepard interpolatory operator, widely used in rational approximation and scattered data interpolation problems. The authors construct and study the Shepard–Gupta-type operator and settle convergence results, uniform and pointwise approximation error estimates, converse theorems and saturation statements, improving in some sense analogous results for the original Shepard-type operator. The peculiar asymptotic behavior of the Shepard–Gupta-type operator allows one to successfully compress images represented by piecewise constant functions, improving previous algorithms.

References

  1. Della Vecchia, B.: Direct and converse results by rational operators. Constr. Approx. 12, 271–285 (1996). https://doi.org/10.1007/BF02433043


  2. Della Vecchia, B., Mastroianni, G.: Pointwise simultaneous approximation by rational operators. J. Approx. Theory 65, 140–150 (1991). https://doi.org/10.1016/0021-9045(91)90099-V


  3. Della Vecchia, B., Mastroianni, G., Totik, V.: Saturation of the Shepard operators. Approx. Theory Appl. 6(4), 76–84 (1990)


  4. Della Vecchia, B., Mastroianni, G., Vertesi, P.: Direct and converse theorems for Shepard rational approximation. Numer. Funct. Anal. Optim. 17, 537–561 (1996). https://doi.org/10.1080/01630569608816709


  5. Somorjai, G.: On a saturation problem. Acta Math. Acad. Sci. Hung. 32, 377–381 (1978). https://doi.org/10.1007/BF01902372


  6. Szabados, J.: On a problem of R. DeVore. Acta Math. Acad. Sci. Hung. 27, 219–223 (1976). https://doi.org/10.1007/BF01896777


  7. Vertesi, P.: Saturation of the Shepard operator. Acta Math. Hung. 72(4), 307–317 (1996). https://doi.org/10.1007/BF00114543


  8. Allasia, G.: A class of interpolatory positive linear operators: theoretical and computational aspects. In: Approximation Theory, Wavelets and Applications. NATO ASI Series C, vol. 454, pp. 1–36 (1995). https://doi.org/10.1007/978-94-015-8577-4_1


  9. Amato, U., Della Vecchia, B.: New results on rational approximation. Results Math. 67, 345–364 (2015). https://doi.org/10.1007/s00025-014-0420-4


  10. Amato, U., Della Vecchia, B.: Modelling by Shepard-type curves and surfaces. J. Comput. Anal. Appl. 20, 611–634 (2016)


  11. Amato, U., Della Vecchia, B.: Weighting Shepard-type operators. Comput. Appl. Math. 36, 885–902 (2016). https://doi.org/10.1007/s40314-015-0263-y


  12. Amato, U., Della Vecchia, B.: Inequalities on Shepard-type operators. J. Math. Inequal. 12(2), 517–530 (2018)


  13. Amato, U., Della Vecchia, B.: Rational operators based on q-integers. Results Math. 72(3), 1109–1128 (2017). https://doi.org/10.1007/s00025-017-0682-8


  14. Wu, Y.-H., Hung, M.-C.: Comparison of spatial interpolation techniques using visualization and quantitative assessment. In: Hung, M. (ed.) Applications of Spatial Statistics, pp. 17–34. IntechOpen, London (2016). https://doi.org/10.5772/65996


  15. Li, L., Zhou, X., Kalo, M., Piltner, R.: Spatiotemporal interpolation methods for the application of estimating population exposure to fine particulate matter in the contiguous U.S. and a real-time web application. Int. J. Environ. Res. Public Health 13, 749 (2016). https://doi.org/10.3390/ijerph13080749


  16. Hammoudeh, M., Newman, R., Dennett, C., Mount, S.: Interpolation techniques for building a continuous map from discrete wireless sensor network data. Wirel. Commun. Mob. Comput. 13, 809–827 (2013). https://doi.org/10.1002/wcm.1139


  17. Szalkai, I., Sebestyén, A., Della Vecchia, B., Kristóf, T., Kótai, L., Bódi, F.: Comparison of 2-variable interpolation methods for predicting the vapour pressure of aqueous glycerol solutions. Hung. J. Ind. Chem. 43, 67–71 (2015). https://doi.org/10.1515/hjic-2015-0011


  18. Gupta, V.: The Bézier variant of Kantorovich operators. Comput. Math. Appl. 47, 227–232 (2004). https://doi.org/10.1016/S0898-1221(04)90019-3


  19. Gupta, V.: Simultaneous approximation for Szasz–Mirakyan–Durrmeyer operators. J. Math. Anal. Appl. 328, 101–105 (2007)


  20. Gupta, V., Doĝru, O.: Approximation of bounded variation functions by a Bézier variant of the Bleimann, Butzer, and Hahn operators. Int. J. Math. Math. Sci. 2006, Article ID 37253 (2006). https://doi.org/10.1155/IJMMS/2006/37253


  21. Gupta, V., Karsli, H.: Rate of convergence for the Bézier variant of the MKZD operators. Georgian Math. J. 14(4), 651–659 (2007)


  22. Gupta, V., Lupas, A.: On the rate of approximation for the Bézier variant of Kantorovich–Balazs operators. Gen. Math. 12, 3–18 (2004)


  23. Gupta, V., Vasishtha, V., Gupta, M.K.: Rate of convergence of the Szasz–Kantorovich–Bézier operators for bounded variation functions. Publ. Inst. Math. 72(80), 137–143 (2002). https://doi.org/10.2298/PIM0272137G


  24. Gupta, V., Zeng, X.: Rate of approximation for the Bézier variant of Balazs Kantorovich operators. Math. Slovaca 57(4), 349–358 (2007). https://doi.org/10.2478/s12175-007-0029-0


  25. Zeng, X.M., Gupta, V.: Rate of convergence of Baskakov-Bézier type operators for locally bounded functions. Comput. Math. Appl. 44, 1445–1453 (2002). https://doi.org/10.1016/S0898-1221(02)00269-9


  26. Gordon, W.J., Wixom, J.A.: Shepard's method of "metric interpolation" to bivariate and multivariate interpolation. Math. Comput. 32, 253–264 (1978). https://doi.org/10.2307/2006273


  27. Szalkai, I., Della Vecchia, B.: Finding better weight functions for generalized Shepard’s operator on infinite intervals. Int. J. Comput. Math. 88, 2838–2851 (2011). https://doi.org/10.1080/00207160.2011.559542


  28. Totik, V.: Approximation by Bernstein polynomials. Am. J. Math. 116(4), 995–1018 (1994). https://doi.org/10.2307/2375007


  29. Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer, New York (1987). https://doi.org/10.1007/978-1-4612-4778-4


  30. Hermann, T., Vertesi, P.: On the method of Somorjai. Acta Math. Hung. 54(3–4), 253–262 (1989). https://doi.org/10.1007/BF01952055



Acknowledgements

The authors are grateful to an anonymous referee for his/her stimulating remarks that improved the paper.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Funding

The authors declare that they have no specific funds to acknowledge.

Author information


Contributions

BDV introduced the variant of the operator and studied the corresponding approximation results. UA applied this operator to the image compression problem. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Biancamaria Della Vecchia.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Amato, U., Della Vecchia, B. On Shepard–Gupta-type operators. J Inequal Appl 2018, 232 (2018). https://doi.org/10.1186/s13660-018-1823-7

