Open Access

Slepian’s inequality for Gaussian processes with respect to weak majorization

Journal of Inequalities and Applications 2013, 2013:5

https://doi.org/10.1186/1029-242X-2013-5

Received: 12 July 2012

Accepted: 15 December 2012

Published: 4 January 2013

Abstract

In this paper, we obtain sufficient conditions for Slepian’s inequality for Gaussian processes with respect to weak majorization, and we provide an example of their application.

MSC: 60E15; 62G30.

Keywords

Slepian’s inequality; majorization; weak majorization; Gaussian processes

1 Introduction and main results

Gaussian processes are natural extensions of multivariate Gaussian random variables to infinite (countable or continuous) index sets. For Gaussian processes, strong and weak stationarity coincide. Gaussian processes are by far the most accessible and well-understood processes on uncountable index sets; they are important in statistical modeling because of the properties they inherit from the normal distribution, and many deep theoretical analyses of their properties are available.

Let $X=(X_1,\dots,X_n)$ and $X^*=(X_1^*,\dots,X_n^*)$ be two centered Gaussian random vectors with covariance matrices $\Sigma=(\sigma_{ij})$ and $\Sigma^*=(\sigma_{ij}^*)$, respectively. The well-known Slepian inequality [1] states that if $\sigma_{ii}=\sigma_{ii}^*$ and $\sigma_{ij}\ge\sigma_{ij}^*$ for every $i,j=1,\dots,n$, then for any $x\in\mathbb{R}$,
$$P\Big(\min_{1\le i\le n}X_i\le x\Big)\le P\Big(\min_{1\le i\le n}X_i^*\le x\Big),\qquad P\Big(\max_{1\le i\le n}X_i\le x\Big)\ge P\Big(\max_{1\le i\le n}X_i^*\le x\Big).$$
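As a quick numerical sanity check (not part of the original argument), the two bounds can be illustrated by simulation; the dimension, the two correlation levels, and the threshold below are arbitrary choices:

```python
# Monte Carlo illustration of Slepian's inequality.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Two 3-dimensional centered Gaussian vectors with equal variances and
# sigma_ij >= sigma*_ij for the off-diagonal entries.
Sigma      = np.array([[1.0, 0.8, 0.8],
                       [0.8, 1.0, 0.8],
                       [0.8, 0.8, 1.0]])
Sigma_star = np.array([[1.0, 0.1, 0.1],
                       [0.1, 1.0, 0.1],
                       [0.1, 0.1, 1.0]])

X      = rng.multivariate_normal(np.zeros(3), Sigma,      size=n_samples)
X_star = rng.multivariate_normal(np.zeros(3), Sigma_star, size=n_samples)

x = 0.5
p_min,      p_max      = (X.min(1)      <= x).mean(), (X.max(1)      <= x).mean()
p_min_star, p_max_star = (X_star.min(1) <= x).mean(), (X_star.max(1) <= x).mean()

# Slepian: P(min X <= x) <= P(min X* <= x) and P(max X <= x) >= P(max X* <= x).
print(p_min <= p_min_star, p_max >= p_max_star)
```

With these parameters the effect is large: the strongly correlated vector has a stochastically larger minimum and a stochastically smaller maximum.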

Slepian’s inequality and its modifications are an essential ingredient in the proofs of many results concerning sample path properties of Gaussian processes; see, e.g., Adler and Taylor [2] and Maurer [3]. Sufficient conditions for Slepian’s inequality with respect to majorization for two Gaussian random vectors were given in Fang and Zhang [4].

Majorization is a pre-ordering of vectors defined through the partial sums of their components arranged in nonincreasing order, and it is an interesting topic in various fields of mathematics and statistics. The study of majorization dates back to Schur [5] and Hardy et al. [6]. Marshall and Olkin [7] show how majorization connects with combinatorics, analytic inequalities, numerical analysis, matrix theory, probability, and statistics. More recent research on majorization in relation to matrix inequalities and norm inequalities was carried out by Ando [8].

In this paper, we establish four Slepian-type inequalities for Gaussian processes with respect to weak majorization; their proofs and an application are given in Section 2. First, we recall the definitions of majorization and weak majorization.

Definition 1.1 (Marshall and Olkin [7])

Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_n)$ and $\lambda^*=(\lambda_1^*,\lambda_2^*,\dots,\lambda_n^*)$ denote two $n$-dimensional real vectors. Let $\lambda_{[1]}\ge\lambda_{[2]}\ge\cdots\ge\lambda_{[n]}$ and $\lambda_{[1]}^*\ge\lambda_{[2]}^*\ge\cdots\ge\lambda_{[n]}^*$ denote the components of $\lambda$ and $\lambda^*$ in decreasing order, respectively. Similarly, let $\lambda_{(1)}\le\lambda_{(2)}\le\cdots\le\lambda_{(n)}$ and $\lambda_{(1)}^*\le\lambda_{(2)}^*\le\cdots\le\lambda_{(n)}^*$ denote the components of $\lambda$ and $\lambda^*$ in increasing order, respectively.

(1) $\lambda$ is said to be majorized by $\lambda^*$, in symbols $\lambda\prec\lambda^*$, if
$$\sum_{i=1}^{m}\lambda_{[i]}\le\sum_{i=1}^{m}\lambda_{[i]}^*$$
for $m=1,2,\dots,n-1$, and $\sum_{i=1}^{n}\lambda_i=\sum_{i=1}^{n}\lambda_i^*$.

(2) $\lambda$ is said to be weakly lower majorized by $\lambda^*$, in symbols $\lambda\prec_w\lambda^*$, if
$$\sum_{i=1}^{m}\lambda_{[i]}\le\sum_{i=1}^{m}\lambda_{[i]}^*$$
for $m=1,2,\dots,n-1$, and $\sum_{i=1}^{n}\lambda_i\le\sum_{i=1}^{n}\lambda_i^*$.

(3) $\lambda$ is said to be weakly upper majorized by $\lambda^*$, in symbols $\lambda\prec^w\lambda^*$, if
$$\sum_{i=1}^{m}\lambda_{(i)}\ge\sum_{i=1}^{m}\lambda_{(i)}^*$$
for $m=1,2,\dots,n-1$, and $\sum_{i=1}^{n}\lambda_i\ge\sum_{i=1}^{n}\lambda_i^*$.
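The three orderings of Definition 1.1 are straightforward to check numerically. The following utility functions are an illustration only (not part of the paper); the small test vectors are arbitrary:

```python
# Checkers for majorization and the two weak majorization orderings.
import numpy as np

def majorized(lam, lam_star):
    """lam ≺ lam_star: partial sums of largest components dominated, totals equal."""
    a, b = np.sort(lam)[::-1], np.sort(lam_star)[::-1]
    return bool(np.all(np.cumsum(a)[:-1] <= np.cumsum(b)[:-1])
                and np.isclose(a.sum(), b.sum()))

def weak_lower_majorized(lam, lam_star):
    """lam ≺_w lam_star: all partial sums of largest components dominated."""
    a, b = np.sort(lam)[::-1], np.sort(lam_star)[::-1]
    return bool(np.all(np.cumsum(a) <= np.cumsum(b)))

def weak_upper_majorized(lam, lam_star):
    """lam ≺^w lam_star: all partial sums of smallest components dominate."""
    a, b = np.sort(lam), np.sort(lam_star)
    return bool(np.all(np.cumsum(a) >= np.cumsum(b)))

print(majorized([3, 1], [4, 0]))            # True: 3 <= 4 and totals both 4
print(weak_lower_majorized([1, 2], [2, 3])) # True: componentwise smaller
print(weak_upper_majorized([2, 3], [1, 2])) # True: componentwise larger
```

Note that componentwise domination $\lambda_i\le\lambda_i^*$ for all $i$ already implies both $\lambda\prec_w\lambda^*$ and $\lambda^*\prec^w\lambda$, which is the mechanism used in the application below.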

The main results of the paper are stated as follows.

Theorem 1.2 Let $X(t)$ and $X^*(t)$, $t\in[0,T]$, be separable Gaussian processes with the same covariance function, i.e.,
$$\operatorname{cov}\big(X(s),X(t)\big)=\operatorname{cov}\big(X^*(s),X^*(t)\big)$$
for all $s,t\in[0,T]$. Denote $u=\inf_{t\in[0,T]}\min\{EX(t),EX^*(t)\}$ and $v=\sup_{t\in[0,T]}\max\{EX(t),EX^*(t)\}$. Let $0\le t_1\le t_2\le\cdots\le t_n\le T$ be an arbitrary partition of $[0,T]$. Let $f:[u,v]\to\mathbb{R}$ be a strictly monotone function, and denote $\mu_f=(f(EX(t_1)),\dots,f(EX(t_n)))$ and $\mu_f^*=(f(EX^*(t_1)),\dots,f(EX^*(t_n)))$.

(1) If $f'(y)>0$, $f''(y)\ge0$ for all $y\in[u,v]$, and $\mu_f\prec^w\mu_f^*$, then
$$P\Big(\inf_{t\in[0,T]}X(t)\le x\Big)\le P\Big(\inf_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$;

(2) If $f'(y)<0$, $f''(y)\le0$ for all $y\in[u,v]$, and $\mu_f\prec_w\mu_f^*$, then
$$P\Big(\inf_{t\in[0,T]}X(t)\le x\Big)\le P\Big(\inf_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$;

(3) If $f'(y)>0$, $f''(y)\le0$ for all $y\in[u,v]$, and $\mu_f\prec_w\mu_f^*$, then
$$P\Big(\sup_{t\in[0,T]}X(t)\le x\Big)\ge P\Big(\sup_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$;

(4) If $f'(y)<0$, $f''(y)\ge0$ for all $y\in[u,v]$, and $\mu_f\prec^w\mu_f^*$, then
$$P\Big(\sup_{t\in[0,T]}X(t)\le x\Big)\ge P\Big(\sup_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$.
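As a numerical illustration of part (1) on a fixed partition (a sketch only, not from the paper; the mean vectors, the equicorrelated covariance, the threshold, and the choice $f(y)=e^y$, which is strictly increasing and convex, are all arbitrary):

```python
# Monte Carlo illustration of Theorem 1.2(1) with f(y) = exp(y).
import numpy as np

rng = np.random.default_rng(3)

mu      = np.array([0.5, 0.7, 0.9])  # (E X(t_i))
mu_star = np.array([0.3, 0.5, 0.7])  # (E X*(t_i))

# Check mu_f ≺^w mu_f*: partial sums of the smallest components of f(mu)
# dominate those of f(mu_star).
f_mu, f_mu_star = np.exp(np.sort(mu)), np.exp(np.sort(mu_star))
assert np.all(np.cumsum(f_mu) >= np.cumsum(f_mu_star))

# Same covariance for both processes (equicorrelated, rho = 0.5); one set
# of centered samples serves for both, since only distributions matter.
Sigma = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
Z = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
x = 0.4
p      = ((Z + mu).min(1)      <= x).mean()
p_star = ((Z + mu_star).min(1) <= x).mean()
print(p <= p_star)  # part (1): P(inf X <= x) <= P(inf X* <= x) -> True
```

Here $\mu\ge\mu^*$ componentwise, so the ordering of the two probabilities even holds pathwise for the coupled samples.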

Taking $f(x)=x$ in Theorem 1.2, we immediately obtain the following result.

Corollary 1.3 Under the same conditions on $X(t)$, $X^*(t)$ and $\{t_i,\,1\le i\le n\}$ as in Theorem 1.2, the following statements hold.

(1) If $(EX(t_1),\dots,EX(t_n))\prec^w(EX^*(t_1),\dots,EX^*(t_n))$, then
$$P\Big(\inf_{t\in[0,T]}X(t)\le x\Big)\le P\Big(\inf_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$;

(2) If $(EX(t_1),\dots,EX(t_n))\prec_w(EX^*(t_1),\dots,EX^*(t_n))$, then
$$P\Big(\sup_{t\in[0,T]}X(t)\le x\Big)\ge P\Big(\sup_{t\in[0,T]}X^*(t)\le x\Big)$$
for all $x\in\mathbb{R}$.
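For a fixed partition, Corollary 1.3(2) can be illustrated by simulation. The sketch below is not part of the paper: the grid, the common Brownian covariance $\operatorname{cov}(s,t)=\min(s,t)$, the two mean vectors, and the threshold are all hypothetical choices.

```python
# Monte Carlo illustration of Corollary 1.3(2) on a four-point grid.
import numpy as np

rng = np.random.default_rng(1)

# Common covariance on the grid: Brownian motion, cov(s, t) = min(s, t).
t = np.array([0.25, 0.5, 0.75, 1.0])
Sigma = np.minimum.outer(t, t)

mu      = np.array([0.1, 0.2, 0.3, 0.4])  # (E X(t_i))
mu_star = np.array([0.2, 0.3, 0.4, 0.5])  # (E X*(t_i)); componentwise larger,
                                          # hence mu ≺_w mu_star

# One set of centered paths serves for both processes, since only the
# distributions of X and X* matter for the two probabilities.
Z = rng.multivariate_normal(np.zeros(len(t)), Sigma, size=200_000)
x = 1.0
p      = ((Z + mu).max(1)      <= x).mean()
p_star = ((Z + mu_star).max(1) <= x).mean()
print(p >= p_star)  # Corollary 1.3(2): P(max X <= x) >= P(max X* <= x)
```

Because the means are ordered componentwise, the coupled paths satisfy $\max_i(Z_i+\mu_i)\le\max_i(Z_i+\mu_i^*)$ surely, so the printed comparison holds for every sample size.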

2 Proof and application

Proof of Theorem 1.2 The four conclusions of Theorem 1.2 can be proved by similar arguments, so we give a detailed proof of part (3) only. Let $0\le t_1\le t_2\le\cdots\le t_n\le T$ be a partition of $[0,T]$ with mesh $\tau=\max_{1\le i\le n}(t_i-t_{i-1})$ (where $t_0=0$), and consider the Gaussian random variables $X(t_1),\dots,X(t_n)$ and $X^*(t_1),\dots,X^*(t_n)$. By separability,
$$\sup_{t\in[0,T]}X(t)=\lim_{\tau\to0}\max_{1\le i\le n}X(t_i),\qquad \sup_{t\in[0,T]}X^*(t)=\lim_{\tau\to0}\max_{1\le i\le n}X^*(t_i).$$
By the assumptions of Theorem 1.2,
$$\operatorname{cov}\big(X(t_i),X(t_j)\big)=\operatorname{cov}\big(X^*(t_i),X^*(t_j)\big)$$
for all $i,j=1,\dots,n$, and
$$\big(f(EX(t_1)),\dots,f(EX(t_n))\big)\prec_w\big(f(EX^*(t_1)),\dots,f(EX^*(t_n))\big).$$
Hence, from Fang and Zhang [4],
$$P\Big(\max_{1\le i\le n}X(t_i)\le x\Big)\ge P\Big(\max_{1\le i\le n}X^*(t_i)\le x\Big).$$
Moreover,
$$P\Big(\sup_{t\in[0,T]}X(t)\le x\Big)=P\Big(\lim_{\tau\to0}\max_{1\le i\le n}X(t_i)\le x\Big)=\lim_{\tau\to0}P\Big(\max_{1\le i\le n}X(t_i)\le x\Big)$$
and
$$P\Big(\sup_{t\in[0,T]}X^*(t)\le x\Big)=P\Big(\lim_{\tau\to0}\max_{1\le i\le n}X^*(t_i)\le x\Big)=\lim_{\tau\to0}P\Big(\max_{1\le i\le n}X^*(t_i)\le x\Big).$$
Combining the three displays above gives
$$P\Big(\sup_{t\in[0,T]}X(t)\le x\Big)\ge P\Big(\sup_{t\in[0,T]}X^*(t)\le x\Big).$$

 □

An application

Let $X(t)=t^2+B_{1,H,K}(t)$ and $X^*(t)=t^3+B_{2,H,K}(t)$ be Gaussian processes, where $B_{i,H,K}(t)$, $i=1,2$, $H\in(0,1)$, $K\in(0,1]$, are centered Gaussian processes (bifractional Brownian motions) such that
$$E\big(B_{i,H,K}(t)B_{i,H,K}(s)\big)=\frac{1}{2^K}\Big[\big(s^{2H}+t^{2H}\big)^K-|t-s|^{2HK}\Big]$$
for all $s,t\in[0,1]$.

It is easy to check that $X(t)$ and $X^*(t)$ satisfy the conditions of Theorem 1.2.

Let $0\le t_1\le t_2\le\cdots\le t_n\le1$ be a partition of $[0,1]$. Since $0\le t_i^3\le t_i^2$ for each $i$, we have
$$(t_1^2,\dots,t_n^2)\prec^w(t_1^3,\dots,t_n^3)\quad\text{and}\quad(t_1^3,\dots,t_n^3)\prec_w(t_1^2,\dots,t_n^2).$$
From Corollary 1.3 we obtain, for all $x\in\mathbb{R}$,
$$P\Big(\inf_{t\in[0,1]}\big[t^2+B_{1,H,K}(t)\big]\le x\Big)\le P\Big(\inf_{t\in[0,1]}\big[t^3+B_{2,H,K}(t)\big]\le x\Big),$$
$$P\Big(\sup_{t\in[0,1]}\big[t^2+B_{1,H,K}(t)\big]\le x\Big)\le P\Big(\sup_{t\in[0,1]}\big[t^3+B_{2,H,K}(t)\big]\le x\Big).$$
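The application can also be checked numerically. The sketch below samples bifractional paths on a grid via a Cholesky factor of the covariance above; the grid, $H$, $K$, the small jitter, and the threshold $x$ are arbitrary choices, and one set of centered paths serves for both processes (the two $B_{i,H,K}$ only need to agree in law):

```python
# Simulated check of the two probability comparisons in the application.
import numpy as np

rng = np.random.default_rng(2)
H, K = 0.6, 0.8
t = np.linspace(0.02, 1.0, 50)

# Bifractional covariance: (1/2^K)[(s^{2H} + t^{2H})^K - |t - s|^{2HK}].
S, T = np.meshgrid(t, t, indexing="ij")
Sigma = ((S**(2 * H) + T**(2 * H))**K - np.abs(T - S)**(2 * H * K)) / 2**K

# Sample centered paths (a tiny jitter keeps the factorization stable),
# then add the drifts t^2 and t^3.
L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(len(t)))
Z = rng.standard_normal((100_000, len(t))) @ L.T
X, X_star = Z + t**2, Z + t**3

x = 0.5
p_inf,      p_sup      = (X.min(1)      <= x).mean(), (X.max(1)      <= x).mean()
p_inf_star, p_sup_star = (X_star.min(1) <= x).mean(), (X_star.max(1) <= x).mean()
print(p_inf <= p_inf_star, p_sup <= p_sup_star)  # both comparisons hold
```

Since $t^2\ge t^3$ on $[0,1]$, the coupled paths satisfy $X\ge X^*$ pointwise, so both comparisons hold surely for this coupling; the theorem, of course, asserts them for independent copies as well.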

Declarations

Acknowledgements

This research was supported by the National Statistical Science Research Project of China (No. 2012LY158) and the Natural Science Foundation of Anhui Province (No. 1208085MA11).

Authors’ Affiliations

(1)
College of Mathematics and Computer Science, Anhui Normal University, Wuhu, China

References

  1. Slepian D: The one-sided barrier problem for Gaussian noise. Bell Syst. Tech. J. 1962, 41: 463–501.
  2. Adler RJ, Taylor JE: Random Fields and Geometry. Springer, New York; 2007.
  3. Maurer A: Transfer bounds for linear feature learning. Mach. Learn. 2009, 75: 327–350.
  4. Fang L, Zhang X: Slepian’s inequality with respect to majorization. Linear Algebra Appl. 2011, 434: 1107–1118.
  5. Schur I: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsber. Berl. Math. Ges. 1923, 22: 9–20.
  6. Hardy GH, Littlewood JE, Pólya G: Some simple inequalities satisfied by convex functions. Messenger Math. 1929, 58: 145–152.
  7. Marshall AW, Olkin I: Inequalities: Theory of Majorization and Its Applications. Academic Press, New York; 1979.
  8. Ando T: Majorization and inequalities in matrix theory. Linear Algebra Appl. 1994, 199: 17–67.

Copyright

© Fang; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
