Open Access

Approximation of multidimensional stochastic processes from average sampling

Journal of Inequalities and Applications 2012, 2012:246

https://doi.org/10.1186/1029-242X-2012-246

Received: 30 April 2012

Accepted: 8 October 2012

Published: 24 October 2012

Abstract

The convergence of the sampling series, the estimate of the truncation error in the mean square sense, and the almost sure sampling reconstruction for multidimensional stochastic processes from average sampling are analyzed. These results generalize the classical results given by Balakrishnan (IRE Trans. Inf. Theory 3(2):143-146, 1957) and Belyaev (Theory Probab. Appl. 4(4):437-444, 1959) for random signals. Using inequalities in the mean square sense, the results of Pogány and Peruničić (Glas. Mat. 36(1):155-167, 2001) are also improved.

MSC:42C15, 60G10, 94A20.

Keywords

sampling theorems; mean square sampling reconstruction; almost sure sampling reconstruction; scalar and vectorial wide sense stationary processes

1 Introduction

Many mathematicians and engineers have discussed the Shannon sampling theorem, also called the Whittaker-Kotel’nikov-Shannon (WKS) sampling theorem, for deterministic signals [1–7]. Kolmogorov, one of the most famous mathematicians of the last century, drew information theorists’ attention to the fact that Kotel’nikov had published the deterministic sampling theorem 16 years before Shannon, and mentioned the investigation of stationary flows of new information at the 1956 Symposium on Information Theory [8]. Soon after that, Balakrishnan [9] proved that a bandlimited random signal can be recovered from its sampled values in the $L^2$ (i.e., mean square) sense, and Belyaev [10] proved that a bandlimited random signal can be recovered from its sampled values in the almost sure (with probability one) sense. For other results on bandlimited random signals, see [11–15].

In practice, signals and their sampled values are often not given in an ideal form. For example, due to physical reasons such as the inertia of the measurement apparatus, the sampled values of a signal obtained in practice may not be the exact values of $X(t,\omega)$ at the times $t_k$; they are rather local averages of $X(t,\omega)$ near $t_k$, i.e., integrals of the stochastic process $X(t,\omega)$ over small time intervals. In 2007, Song, Sun, Yang et al. [16, 17] gave some surprising results on average sampling theorems for univariate bandlimited processes in the $L^2$ sense. In 2009, Song, Wang and Xie [18] proved that second-order moment processes can be approximated by average sampling. Recently, He and Song [19] proved that a real-valued weakly stationary process can be approximated by its local averages in the almost sure sense. For results on average sampling of deterministic signals, see Gröchenig [20], Djokovic and Vaidyanathan [21], Aldroubi [22], and Sun and Zhou [23].

Recently, quite a few researchers have started to discuss the Shannon sampling theorem for multi-band deterministic signals; see [24–26]. Following their idea and using some results on bandlimited random signals from [27–29], we give some results on average sampling theorems for multi-band processes in this paper.

Before stating the new results, we first introduce some notation. $L^p(\mathbb{R})$ is the space of all measurable functions $f$ on $\mathbb{R}$ for which $\|f\|_p<+\infty$, where
$$\|f\|_p:=\Bigl(\int_{\mathbb{R}}|f(t)|^p\,dt\Bigr)^{1/p},\qquad 1\le p<\infty.$$

$B_{\pi\Omega,2}$ is the set of all entire functions $f$ of exponential type, with type at most $\pi\Omega$, that belong to $L^2(\mathbb{R})$ when restricted to the real line; see [5]. By the Paley-Wiener theorem, a square integrable function $f$ is bandlimited to $[-\pi\Omega,\pi\Omega]$ if and only if $f\in B_{\pi\Omega,2}$.

Given a probability space $(W,\mathcal{A},P)$, a real- or complex-valued stochastic process $X(t):=X(t,\omega)$ defined on $\mathbb{R}\times W$ is said to be stationary in a weak sense if for all $t\in\mathbb{R}$, $E[X(t)\overline{X(t)}]<\infty$, where $\overline{X(t)}$ is the complex conjugate of $X(t)$, and the autocorrelation function
$$R_X(t,t+\tau):=\int_W X(t,\omega)\,\overline{X(t+\tau,\omega)}\,dP(\omega)$$

is independent of $t\in\mathbb{R}$, i.e., $R_X(t,t+\tau)$ depends only on $\tau\in\mathbb{R}$. For this reason, we will use $R_X(\tau)$ to denote $R_X(t,t+\tau)$.

The following results on a one-dimensional real- or complex-valued stochastic process are known. Recall that the function sinc is defined as
$$\operatorname{sinc}(t)=\begin{cases}\dfrac{\sin\pi t}{\pi t}, & \text{if } t\neq 0;\\[2pt] 1, & \text{if } t=0.\end{cases}$$
(1.1)

Proposition A ([9], Theorem 1)

Let $X(t)$, $-\infty<t<\infty$, be a real or complex valued stochastic process, stationary in the ‘wide sense’ (or ‘second-order’ stationary) and with a spectral density vanishing outside the interval of angular frequencies $[-\pi\Omega,\pi\Omega]$. Then $X(t)$ has the representation
$$X(t)=\lim\sum_{n=-\infty}^{\infty}X(n/\Omega)\operatorname{sinc}(\Omega t-n)$$
(1.2)
for every $t$, where lim stands for the limit in the mean square sense, i.e.,
$$\lim_{N\to\infty}E\Bigl\{\Bigl|X(t)-\sum_{n=-N}^{N}X(n/\Omega)\operatorname{sinc}(\Omega t-n)\Bigr|^2\Bigr\}=0.$$
(1.3)

Proposition B ([10], Theorem 5)

Let $X(t)$, $-\infty<t<+\infty$, be a process with a bounded spectrum. If its covariance has the form
$$R_X(\tau)=\int_{-\pi\Omega}^{\pi\Omega}e^{i\tau\lambda}\,dF(\lambda),$$
(1.4)
where $F(\lambda)$ is a spectral function of $X(t)$, then for any fixed number $\Omega'>\Omega$ and almost all sample functions, the formula
$$X(t,\omega)=\sum_{k=-\infty}^{\infty}X(k/\Omega',\omega)\operatorname{sinc}(\Omega' t-k)$$
(1.5)
is valid, i.e.,
$$P\Bigl\{X(t,\omega)=\lim_{N\to\infty}\sum_{k=-N}^{N}X(k/\Omega',\omega)\operatorname{sinc}(\Omega' t-k)\Bigr\}=1.$$
(1.6)
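To make Propositions A and B concrete, the following Python sketch simulates a bandlimited weak sense stationary process as a finite sum of sinusoids with random frequencies in $[0,\pi\Omega]$ and random phases, and reconstructs it at one time point from samples taken at the oversampled rate $\Omega'>\Omega$ via the truncated cardinal series. All numerical values (band limit, oversampling rate, truncation length, number of simulated paths) are illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

Omega = 1.0      # band limit: spectrum contained in [-pi*Omega, pi*Omega]
Omega_p = 1.25   # fixed oversampling rate Omega' > Omega, as in Proposition B
N = 200          # truncation: samples at t_k = k/Omega', k = -N, ..., N

n_paths, n_tones = 400, 64

# Random-sinusoid model of a bandlimited weak sense stationary process:
# X(t) = sqrt(2/n_tones) * sum_j cos(lambda_j * t + phi_j), with lambda_j
# uniform on [0, pi*Omega] and phi_j uniform on [0, 2*pi).
lam = rng.uniform(0.0, np.pi * Omega, size=(n_paths, n_tones))
phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_paths, n_tones))

def X(t):
    """Evaluate all simulated paths at the scalar time t."""
    return np.sqrt(2.0 / n_tones) * np.cos(lam * t + phi).sum(axis=1)

# Truncated cardinal series: sum_k X(k/Omega') * sinc(Omega' * t - k).
k = np.arange(-N, N + 1)
t0 = 0.37                                  # an arbitrary reconstruction time
samples = np.stack([X(kk / Omega_p) for kk in k], axis=1)  # (n_paths, 2N+1)
kernel = np.sinc(Omega_p * t0 - k)         # np.sinc(x) = sin(pi*x)/(pi*x)
recon = samples @ kernel

mse = np.mean(np.abs(X(t0) - recon) ** 2)
print(f"empirical mean-square reconstruction error at t = {t0}: {mse:.3e}")
```

Increasing the truncation length drives the empirical error toward zero, which is the behavior described by (1.3) and (1.6).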
Given a probability space $(W,\mathcal{A},P)$, a $d$-dimensional real- or complex-valued stochastic process $X(t):=X(t,\omega)=(X_1(t),X_2(t),\ldots,X_d(t))$ defined on $\mathbb{R}\times W$ is said to be stationary in a weak sense if for all $t\in\mathbb{R}$ and $i=1,2,\ldots,d$, $E[X_i(t)\overline{X_i(t)}]<\infty$, and the autocorrelation functions
$$R_{X_iX_j}(t,t+\tau):=\int_W X_i(t,\omega)\,\overline{X_j(t+\tau,\omega)}\,dP(\omega)$$

are independent of $t\in\mathbb{R}$, i.e., the functions $R_{X_iX_j}(t,t+\tau)$ depend on $\tau\in\mathbb{R}$ only. We will use $R_{X_iX_j}(\tau)$ to denote $R_{X_iX_j}(t,t+\tau)$; see [28].

A $d$-dimensional weak sense stationary process $X(t)$ is said to be bandlimited to an interval $[-\pi\Omega,\pi\Omega]$ if and only if $R_{X,X}(\tau)$ belongs to $B_{\pi\Omega,2}$. More precisely, this means that each $R_{X_jX_k}(\tau)$, $j,k=1,2,\ldots,d$, belongs to $B_{\pi\Omega_{jk},2}$, i.e.,
$$R_{X_jX_k}(\tau)=\int_{-\pi\Omega_{jk}}^{\pi\Omega_{jk}}e^{i\tau\lambda}\,dF_{jk}(\lambda),$$
(1.7)

and $\Omega=\max\{\Omega_{jk}:j,k=1,2,\ldots,d\}$.

It is well known that all norms on $\mathbb{R}^d$ are equivalent. Thus, we will apply the max-norm of $X(t)$:
$$\|X(t)\|=\max_{i=1,2,\ldots,d}E\bigl[X_i(t)\overline{X_i(t)}\bigr].$$
(1.8)
The measured sampled values of $X(t)$ at $t_k$, $k\in\mathbb{Z}$, are
$$\langle X(\cdot),u_k\rangle=\int_{\mathbb{R}}X(t)u_k(t)\,dt$$
(1.9)
for some collection of averaging functions $u_k(t)=(u_{1k}(t),u_{2k}(t),\ldots,u_{dk}(t))$, $k\in\mathbb{Z}$, which are convex functions satisfying the following properties:
$$\operatorname{supp}u_{ik}\subset[t_k-\sigma_{ik},\,t_k+\sigma'_{ik}],\qquad u_{ik}(t)\ge0,\quad i=1,2,\ldots,d,\qquad\int u_{ik}(t)\,dt=1,$$
(1.10)

where $\delta_i/2\le\sigma_{ik},\sigma'_{ik}\le\delta_i$ and the $\delta_i$ are some positive numbers. In this paper, we will use the notations $\delta=\max\{\delta_1,\delta_2,\ldots,\delta_d\}$ and $\delta'=\min\{\delta_1,\delta_2,\ldots,\delta_d\}$.
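As a small illustration of the sampling model (1.9)-(1.10), the sketch below uses box-shaped averaging functions (constant on $[t_k-\sigma_{ik},t_k+\sigma'_{ik}]$, zero outside, normalized to unit integral) and compares the resulting local averages of one path with its exact sample values. The box shape, the test path, and all numerical constants are assumptions made only for this example.

```python
import numpy as np

def box_average(path, t_k, sigma_left, sigma_right, n_quad=400):
    """Local average of `path` around t_k with a box-shaped weight u_k.

    u_k is constant on [t_k - sigma_left, t_k + sigma_right], zero outside,
    and normalized so that its integral equals 1, as required in (1.10)."""
    a, b = t_k - sigma_left, t_k + sigma_right
    t = np.linspace(a, b, n_quad)
    w = path(t) / (b - a)                      # u_k(t) * X(t) with u_k = 1/(b-a)
    # trapezoidal rule for the integral in (1.9)
    return np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t))

path = lambda t: np.cos(0.8 * t + 0.3)         # one smooth test path
Omega_p, delta = 1.25, 0.05                    # sample rate Omega' and width delta
for k in range(-2, 3):
    t_k = k / Omega_p
    avg = box_average(path, t_k, delta / 2, delta / 2)
    print(f"k = {k:+d}: exact sample {path(t_k):+.6f}, local average {avg:+.6f}")
```

For this smooth test path the two columns agree up to terms of order $\delta^2$; the truncation estimates of Section 2 quantify the corresponding effect for random paths.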

The following results on $d$-dimensional real-valued stochastic processes were obtained in 2001.

Proposition C ([28], Theorem 2)

Let $X(t)$, $-\infty<t<+\infty$, be a $d$-dimensional weak sense stationary process with a bounded spectrum and cross-correlation functions $R_{X_jX_k}(\tau)$ satisfying (1.7). Then we have
$$\lim_{N\to\infty}\bigl\|X(t)-\bigl(X_1^*(t),X_2^*(t),\ldots,X_d^*(t)\bigr)\bigr\|=0,$$
(1.11)
where
$$X_i^*(t)=\sum_{n_i=-N_i}^{N_i}X_i(n_i/\Omega'_{ii})\operatorname{sinc}(\Omega'_{ii}t-n_i),\qquad i=1,2,\ldots,d,$$

$N=\min\{N_1,N_2,\ldots,N_d\}$, and the $\Omega'_{ii}>\Omega_{ii}$ are fixed numbers.

Proposition D ([28], Theorem 3)

Let $X(t)$, $R_{X_jX_k}(\tau)$, $X_i^*(t)$, $N$, and $\Omega'_{ii}$ be as in Proposition C. Then we have
$$P\Bigl\{X(t)=\lim_{N\to\infty}\bigl(X_1^*(t),X_2^*(t),\ldots,X_d^*(t)\bigr)\Bigr\}=1.$$
(1.12)

2 Lemmas and the main results

Let us introduce some preliminary results first.

Lemma 2.1 ([16], Lemma 2.1)

For any $\Omega>0$ and $p,q>1$ satisfying $1/p+1/q=1$, we have
$$\sum_{k=-\infty}^{\infty}\bigl|\operatorname{sinc}(\Omega t-k)\bigr|^q\le1+\Bigl(\frac{2}{\pi}\Bigr)^q\frac{q}{q-1}<p.$$
(2.1)
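A quick numerical check of the bound (2.1): for several values of $q$, the sinc power sum (truncated to $|k|\le10^5$) is maximized over a grid of points $x=\Omega t$ in $[0,1]$ and compared with $1+(2/\pi)^q\,q/(q-1)$ and with $p=q/(q-1)$. The truncation length and the grid are arbitrary choices made only for this check.

```python
import numpy as np

def sinc_power_sum(x, q, K=100_000):
    """Truncated version of sum_k |sinc(x - k)|**q; np.sinc(t) = sin(pi*t)/(pi*t)."""
    k = np.arange(-K, K + 1)
    return np.sum(np.abs(np.sinc(x - k)) ** q)

for q in (1.5, 2.0, 4.0):
    p = q / (q - 1.0)
    bound = 1.0 + (2.0 / np.pi) ** q * q / (q - 1.0)
    # the (infinite) sum is periodic in x, so scanning x = Omega*t in [0, 1] suffices
    worst = max(sinc_power_sum(x, q) for x in np.linspace(0.0, 1.0, 21))
    print(f"q = {q}: sup over grid = {worst:.4f} <= {bound:.4f} < p = {p:.4f}")
```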
Lemma 2.2 Suppose that $X(t,\omega)$ is a weak sense stationary stochastic process with its autocorrelation function $R_{XX}$ belonging to $B_{\pi\Omega,2}$ and satisfying $R''_{XX}(t)\in C(\mathbb{R})$. For all $j\in\mathbb{Z}$, $|\sigma|\le\delta$, and $|\sigma'|\le\delta$, let
$$D(j/\Omega;\delta):=\sup_{|\sigma|\le\delta,\,|\sigma'|\le\delta}\bigl|R_{XX}(j/\Omega)-R_{XX}(j/\Omega-\sigma)-R_{XX}(j/\Omega+\sigma')+R_{XX}(j/\Omega+\sigma'-\sigma)\bigr|=\sup_{|\sigma|\le\delta,\,|\sigma'|\le\delta}\Bigl|\int_{-\sigma}^{0}\!\!\int_{0}^{\sigma'}R''_{XX}(j/\Omega+u+v)\,du\,dv\Bigr|.$$
Then for all $r,M,N\in\mathbb{Z}^+$, we have
$$\sum_{j=-M}^{N}\bigl[D(j/\Omega;\delta)\bigr]^r\le(M+N+1)\,\delta^{2r}\,\bigl\|R''_{XX}\bigr\|_\infty^r.$$
(2.2)
Proof Since $R_{XX}$ is even and $R''_{XX}(t)\in C(\mathbb{R})$, for every $|\sigma|\le\delta$ and $|\sigma'|\le\delta$ we have
$$\bigl|R_{XX}(j/\Omega)-R_{XX}(j/\Omega-\sigma)-R_{XX}(j/\Omega+\sigma')+R_{XX}(j/\Omega+\sigma'-\sigma)\bigr|=\Bigl|\int_{-\sigma}^{0}\!\!\int_{0}^{\sigma'}R''_{XX}(j/\Omega+u+v)\,du\,dv\Bigr|\le|\sigma|\,|\sigma'|\,\bigl\|R''_{XX}\bigr\|_\infty\le\delta^2\bigl\|R''_{XX}\bigr\|_\infty,$$
so that $[D(j/\Omega;\delta)]^r\le\delta^{2r}\|R''_{XX}\|_\infty^r$ for every $j$. Summing over $j=-M,\ldots,N$ gives (2.2).

The proof is now completed. □

Following Belyaev’s approach as in [14], we can easily conclude the following result.

Lemma 2.3 Suppose that $X(t,\omega)$ is a weak sense stationary stochastic process with its autocorrelation function $R_{XX}$ belonging to $B_{\pi\Omega,2}$. For any $\Omega'>\Omega$, we have
(2.3)
The following results are the main results of this paper. To state them, we introduce the following notations. For any integer $k$, let
$$\widetilde X_i(k/\Omega',\omega)=\int_{k/\Omega'-\sigma_{ik}}^{k/\Omega'+\sigma'_{ik}}u_{ik}(t)X_i(t,\omega)\,dt$$
(2.4)
and
$$\widetilde X(k/\Omega',\omega)=\bigl[\widetilde X_1(k/\Omega',\omega),\widetilde X_2(k/\Omega',\omega),\ldots,\widetilde X_d(k/\Omega',\omega)\bigr],$$
(2.5)

where $\{u_k(t)\}$ is a sequence of continuous functions satisfying (1.10).

For $M_i,N_i\in\mathbb{Z}^+$, $i=1,2,\ldots,d$, we define $\overline M=\max\{M_1,M_2,\ldots,M_d\}$, $\overline N=\max\{N_1,N_2,\ldots,N_d\}$, $M=\min\{M_1,M_2,\ldots,M_d\}$, $N=\min\{N_1,N_2,\ldots,N_d\}$,
$$X_i(t,M_i,N_i,\omega)=\sum_{k=-M_i}^{N_i}\biggl(\int_{k/\Omega'-\sigma_{ik}}^{k/\Omega'+\sigma'_{ik}}u_{ik}(s)X_i(s,\omega)\,ds\biggr)\operatorname{sinc}(\Omega' t-k)=\sum_{k=-M_i}^{N_i}\widetilde X_i(k/\Omega',\omega)\operatorname{sinc}(\Omega' t-k)$$
(2.6)
and
(2.7)
Theorem 2.4 Suppose that $X(t)$ is a weak sense stationary stochastic process with its correlation function $R_{XX}$ belonging to $B_{\pi\Omega,2}$. For $\Omega'>\Omega>2$, $M_i,N_i\ge10$, $i=1,2,\ldots,d$, and $1/\delta^2\ge\min_i\{M_iN_i\}\ge10^4$, the following estimate is valid:
(2.8)
Consequently,
$$\lim_{M,N\to\infty}E\Bigl[\Bigl|\sum_{k=-M}^{N}\bigl[X(k/\Omega',\omega)-\widetilde X(k/\Omega',\omega)\bigr]\operatorname{sinc}(\Omega' t-k)\Bigr|^2\Bigr]=0,$$
(2.9)

where $M=\min\{M_1,M_2,\ldots,M_d\}$ and $N=\min\{N_1,N_2,\ldots,N_d\}$.

Theorem 2.5 Suppose that $X(t)$ is a weak sense stationary stochastic process with its correlation function $R_{XX}$ belonging to $B_{\pi\Omega,2}$ and satisfying $R''_{X_iX_i}(t)\in C(\mathbb{R})$, $i=1,2,\ldots,d$. Then for $\Omega'>\Omega\ge2$ and $\delta\le1/N$, we have
(2.10)

Obviously, when $u_{ik}(t)=\delta(t-k/\Omega')$, $i=1,2,\ldots,d$, where $\delta$ here stands for the Dirac delta function, Theorem 2.4 and Theorem 2.5 reduce to Proposition C and Proposition D, respectively. When, in addition, $d=1$, we recover Proposition A and Proposition B, respectively.
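The next Python sketch illustrates Theorem 2.4 numerically for $d=2$: one realization of a bandlimited vector path is reconstructed at a single time point both from its exact samples (as in Propositions C and D) and from local averages over windows of half-width $\delta/2$ around the points $k/\Omega'$, and the two pathwise errors are compared. The random-sinusoid path model, the box averaging windows, and all numerical constants are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
Omega, Omega_p, N, delta, d = 1.0, 1.25, 150, 0.02, 2

# One realization of a 2-dimensional bandlimited path (random sinusoids).
n_tones = 32
lam = rng.uniform(0.0, np.pi * Omega, size=(d, n_tones))
phi = rng.uniform(0.0, 2.0 * np.pi, size=(d, n_tones))

def X(t):
    """Vector path X(t) = (X_1(t), ..., X_d(t)); t may be a scalar or an array."""
    t = np.atleast_1d(t)[:, None, None]
    return np.sqrt(2.0 / n_tones) * np.cos(lam * t + phi).sum(axis=-1)  # (len(t), d)

def averaged_sample(i, k, n_quad=64):
    """Local average of component i over [k/Omega' - delta/2, k/Omega' + delta/2]."""
    t = np.linspace(k / Omega_p - delta / 2, k / Omega_p + delta / 2, n_quad)
    return X(t)[:, i].mean()          # box weight u_ik with unit total mass

ks = np.arange(-N, N + 1)
t0 = 0.37
kernel = np.sinc(Omega_p * t0 - ks)

for i in range(d):
    exact = np.array([X(k / Omega_p)[0, i] for k in ks])
    local = np.array([averaged_sample(i, k) for k in ks])
    truth = X(t0)[0, i]
    err_exact = abs(truth - exact @ kernel)   # series with exact samples
    err_local = abs(truth - local @ kernel)   # series with averaged samples
    print(f"component {i}: |error, exact samples| = {err_exact:.2e}, "
          f"|error, averaged samples| = {err_local:.2e}")
```

For such a small $\delta$ the two errors are typically of the same size, in line with (2.9); enlarging $\delta$ degrades only the reconstruction from averaged samples.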

3 Proof of the main results

Proof of Theorem 2.4 Using Proposition A and following the methods in [16], we have
Applying Hölder’s inequality, we get
where $1/p+1/q=1$. By the Hausdorff-Young inequality (see [30], p. 120), we have
where $0\le 1/s+1/r-1=1/p$. Let $r=\ln(M_iN_i)/4$. Noticing that $M_i,N_i\ge10$ and $M_iN_i\ge10^4$, we have
$$(2N_i+2M_i+1)^{1/r}\le 1.8260\,e^4.$$
Let $s=2r/(2r-1)$ and $s'=2r$. Then $1/s+1/s'=1$ and $p=2r=\ln(M_iN_i)/2$. We get
$$\Bigl(\sum_{j=-\infty}^{\infty}\bigl|\operatorname{sinc}(\Omega' t-j)\bigr|^{s'}\Bigr)^{1/s'}\le1+\frac{2}{\pi}\Bigl(\frac{s'}{s'-1}\Bigr)^{1/s'}=\Bigl(\frac{1}{p}+\frac{2}{\pi p}\Bigl(\frac{s'}{s'-1}\Bigr)^{1/s'}\Bigr)p\le\frac{\ln(M_iN_i)}{\pi}.$$
Hence, it holds

Thus (2.8) is valid. The second assertion of the theorem follows immediately. This completes the proof. □
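As a small sanity check on the explicit constants used above, the snippet below evaluates $(2N_i+2M_i+1)^{1/r}$ with $r=\ln(M_iN_i)/4$ and the quantity $1+\frac{2}{\pi}\bigl(\frac{s'}{s'-1}\bigr)^{1/s'}$ with $s'=2r$ for a few admissible pairs $M_i,N_i\ge10$ with $M_iN_i\ge10^4$, and compares them with $1.8260\,e^4$ and $\ln(M_iN_i)/\pi$, respectively. The pairs tested are arbitrary choices.

```python
import numpy as np

for M, N in [(10, 1000), (100, 100), (50, 400), (10, 10**6)]:
    assert M >= 10 and N >= 10 and M * N >= 10**4   # admissible range in Theorem 2.4
    r = np.log(M * N) / 4.0
    s_prime = 2.0 * r
    lhs1 = (2 * N + 2 * M + 1) ** (1.0 / r)
    lhs2 = 1.0 + (2.0 / np.pi) * (s_prime / (s_prime - 1.0)) ** (1.0 / s_prime)
    print(f"M = {M}, N = {N}: "
          f"(2N+2M+1)^(1/r) = {lhs1:6.2f} <= {1.826 * np.e ** 4:6.2f}, "
          f"sinc-sum bound = {lhs2:5.3f} <= {np.log(M * N) / np.pi:5.3f}")
```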

Proof of Theorem 2.5 Define
(3.1)
(3.2)
(3.3)
Using Theorem 2.4, we have
(3.4)
Thus, for any fixed t, we have
$$\sum_{n=N}^{\infty}E\bigl|Y^n_{X_i}(t,\omega)\bigr|^2<\infty.$$
(3.5)
Thus, the series converges uniformly when $t$ lies in any bounded interval. Using the Chebyshev inequality, for all $\varepsilon>0$ and all $t$ in any bounded interval, we have
$$P\Bigl\{\max_{1\le i\le d}\bigl|X_i(t,\omega)-X_i(t,N,N,\omega)\bigr|\ge\varepsilon\Bigr\}\le\sum_{n=N}^{\infty}O\bigl(n^{-2}\bigr)<\infty.$$
(3.6)
Using the famous Borel-Cantelli lemma, we have
$$P\Bigl\{\max_{1\le i\le d}\bigl|X_i(t,\omega)-X_i(t,N,N,\omega)\bigr|\ge\varepsilon\ \text{for infinitely many }N\Bigr\}=0.$$
(3.7)
In other words,
$$P\Bigl\{X(t,\omega)=\lim_{N\to\infty}\sum_{k=-N}^{N}\widetilde X(k/\Omega',\omega)\operatorname{sinc}(\Omega' t-k)\Bigr\}=1.$$
(3.8)

The proof is completed. □

Conclusions

In this paper, we have analyzed the convergence of the sampling series, the estimate of the truncation error in the mean square sense, and the almost sure sampling reconstruction for multidimensional random signals from average sampling. The proposed results significantly extend the classical Shannon sampling theorem to multidimensional random signals sampled by local averages.

Declarations

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 60872161, 60932007). The author would like to express his sincere gratitude to anonymous referees and professors Yanxia Ren and Renming Song for many valuable suggestions and comments which helped him to improve the paper.

Authors’ Affiliations

(1)
School of Science & SKL of HESS, Tianjin University, Tianjin, China

References

  1. Shannon CE: Communication in the presence of noise. Proc. IRE 1949, 37(1):865–886.
  2. Unser M: Sampling-50 years after Shannon. Proc. IEEE 2000, 88(4):569–587.
  3. Xia XG, Nashed MZ: A method with error estimate for band-limited signal extrapolation from inaccurate data. Inverse Probl. 1997, 13:1641–1661. doi:10.1088/0266-5611/13/6/015
  4. Qian L: On the regularized Whittaker-Kotel’nikov-Shannon sampling formula. Proc. Am. Math. Soc. 2002, 131(4):1169–1176.
  5. Marvasti F: Nonuniform Sampling: Theory and Practice. Kluwer Academic/Plenum, New York; 2001.
  6. Smale S, Zhou DX: Shannon sampling and function reconstruction from point values. Bull. Am. Math. Soc. 2004, 41(3):279–305. doi:10.1090/S0273-0979-04-01025-0
  7. Boche H, Mönich UJ: Behavior of the quantization operator for bandlimited, non-oversampled signals. IEEE Trans. Inf. Theory 2010, 56(5):2433–2440.
  8. Kolmogorov AN: On the Shannon theory of transmission in the case of continuous signals. IRE Trans. Inf. Theory 1956, 2(4):102–108. doi:10.1109/TIT.1956.1056823
  9. Balakrishnan AV: A note on the sampling principle for continuous signals. IRE Trans. Inf. Theory 1957, 3(2):143–146. doi:10.1109/TIT.1957.1057404
  10. Belyaev YK: Analytical random processes. Theory Probab. Appl. 1959, 4(4):437–444.
  11. Lloyd SP: A sampling theorem for stationary (wide sense) stochastic processes. Trans. Am. Math. Soc. 1959, 92(4):1–12.
  12. Piranashvili PA: On the problem of interpolation of random processes. Theory Probab. Appl. 1967, 12:647–657. doi:10.1137/1112079
  13. Splettstösser W: Sampling series approximation of continuous weak sense stationary processes. Inf. Control 1981, 50:228–241. doi:10.1016/S0019-9958(81)90343-0
  14. Seip K: A note on sampling of bandlimited stochastic processes. IEEE Trans. Inf. Theory 1990, 36(5):1186. doi:10.1109/18.57226
  15. Houdré C: Reconstruction of band limited processes from irregular samples. Ann. Probab. 1995, 23(2):674–695. doi:10.1214/aop/1176988284
  16. Song Z, Sun W, Yang S, Zhu G: Approximation of weak sense stationary stochastic processes from local averages. Sci. China Ser. A 2007, 50(4):457–463.
  17. Song Z, Sun W, Zhou X, Hou Z: An average sampling theorem for bandlimited stochastic processes. IEEE Trans. Inf. Theory 2007, 53(12):4798–4800.
  18. Song Z, Wang P, Xie W: Approximation of second-order moment processes from local averages. J. Inequal. Appl. 2009, 2009: Article ID 154632.
  19. He G, Song Z: Approximation of WKS sampling theorem on random signals. Numer. Funct. Anal. Optim. 2011, 32(4):397–408. doi:10.1080/01630563.2011.556287
  20. Gröchenig K: Reconstruction algorithms in irregular sampling. Math. Comput. 1992, 45(199):181–194.
  21. Djokovic I, Vaidyanathan PP: Generalized sampling theorems in multiresolution subspaces. IEEE Trans. Signal Process. 1997, 45(3):583–599. doi:10.1109/78.558473
  22. Aldroubi A, Sun Q, Tang WS: Convolution, average sampling, and a Calderon resolution of the identity for shift-invariant spaces. J. Fourier Anal. Appl. 2005, 11(2):215–244. doi:10.1007/s00041-005-4003-3
  23. Sun W, Zhou X: Reconstruction of bandlimited functions from local averages. Constr. Approx. 2002, 18(2):205–222. doi:10.1007/s00365-001-0011-y
  24. Selva J: Regularized sampling of multiband signals. IEEE Trans. Signal Process. 2010, 58(11):5624–5638.
  25. Hong YM, Pfander GE: Irregular and multi-channel sampling of operators. Appl. Comput. Harmon. Anal. 2010, 29:214–231. doi:10.1016/j.acha.2009.10.006
  26. Micchelli CA, Xu Y, Zhang H: Optimal learning of bandlimited functions from localized sampling. J. Complex. 2009, 25:85–114. doi:10.1016/j.jco.2009.02.005
  27. Pourahmadi M: A sampling theorem for multivariate stationary processes. J. Multivar. Anal. 1983, 13(1):177–186. doi:10.1016/0047-259X(83)90012-X
  28. Pogány TK, Peruničić PM: On the multidimensional sampling theorem. Glas. Mat. 2001, 36(1):155–167.
  29. Olenko A, Pogány T: Average sampling restoration of harmonizable processes. Commun. Stat., Theory Methods 2011, 40(19–20):3587–3598. doi:10.1080/03610926.2011.581180
  30. Butzer PL: The Hausdorff-Young theorems of Fourier analysis and their impact. J. Fourier Anal. Appl. 1994, 1(2):113–130. doi:10.1007/s00041-001-4006-7

Copyright

© Song; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.