# Slepian’s inequality for Gaussian processes with respect to weak majorization

## Abstract

In this paper, we obtain sufficient conditions for Slepian’s inequality for Gaussian processes with respect to weak majorization. We also provide an example illustrating an application of the results.

MSC:60E15, 62G30.

## 1 Introduction and main results

Gaussian processes are natural extensions of multivariate Gaussian random variables to infinite (countable or continuous) index sets. For Gaussian processes, strong and weak stationarity coincide. Gaussian processes are by far the most accessible and well-understood processes on uncountable index sets; they are important in statistical modeling because of the properties they inherit from the normal distribution, and many deep theoretical analyses of their properties are available.

Let $X=\left({X}_{1},\dots ,{X}_{n}\right)$ and ${X}^{\ast }=\left({X}_{1}^{\ast },\dots ,{X}_{n}^{\ast }\right)$ be two centered Gaussian random vectors with covariance matrices $\mathrm{\Sigma }=\left({\sigma }_{ij}\right)$ and ${\mathrm{\Sigma }}^{\ast }=\left({\sigma }_{ij}^{\ast }\right)$, respectively. The well-known Slepian inequality [1] states that if ${\sigma }_{ii}={\sigma }_{ii}^{\ast }$ and ${\sigma }_{ij}\le {\sigma }_{ij}^{\ast }$ for every $i,j=1,\dots ,n$, then for any $x\in R$,

$P\left(\underset{1\le i\le n}{min}{X}_{i}\ge x\right)\le P\left(\underset{1\le i\le n}{min}{X}_{i}^{\ast }\ge x\right),\phantom{\rule{2em}{0ex}}P\left(\underset{1\le i\le n}{max}{X}_{i}\le x\right)\le P\left(\underset{1\le i\le n}{max}{X}_{i}^{\ast }\le x\right).$
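To make the inequality concrete, here is a small Monte Carlo sketch (not part of the original paper; the covariance matrices and the threshold `x = 0.5` are arbitrary illustrative choices satisfying the hypotheses, i.e., equal diagonals and ${\sigma }_{ij}\le {\sigma }_{ij}^{\ast }$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two covariance matrices with equal diagonals and sigma_ij <= sigma*_ij
# entrywise, as required by Slepian's inequality.
Sigma      = np.array([[1.0, 0.1, 0.1],
                       [0.1, 1.0, 0.1],
                       [0.1, 0.1, 1.0]])
Sigma_star = np.array([[1.0, 0.5, 0.5],
                       [0.5, 1.0, 0.5],
                       [0.5, 0.5, 1.0]])

n = 200_000
X      = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
X_star = rng.multivariate_normal(np.zeros(3), Sigma_star, size=n)

x = 0.5
p_max      = np.mean(X.max(axis=1) <= x)       # estimates P(max X_i <= x)
p_max_star = np.mean(X_star.max(axis=1) <= x)  # estimates P(max X*_i <= x)
print(p_max, p_max_star)                       # p_max should be the smaller one
```

With the more strongly correlated vector, the maximum is stochastically smaller, so the estimated `p_max_star` exceeds `p_max`, in line with the inequality.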

Slepian’s inequality and its modifications are an essential ingredient in the proofs of many results concerning sample path properties of Gaussian processes; see, e.g., Adler and Taylor [2] and Maurer [3]. Sufficient conditions for Slepian’s inequality with respect to majorization for two Gaussian random vectors were given by Fang and Zhang [4].

Majorization is a preordering of vectors obtained by comparing partial sums of their components sorted in nonincreasing order; it is an interesting topic in various fields of mathematics and statistics. The study of majorization dates back to Schur [5] and Hardy et al. [6]. Marshall and Olkin [7] show how majorization is connected with combinatorics, analytic inequalities, numerical analysis, matrix theory, probability, and statistics. Recent research on majorization with respect to matrix inequalities and norm inequalities has been carried out by Ando [8].

In this paper, we establish four Slepian-type inequalities for Gaussian processes with respect to weak majorization; their proofs and an application are given in Section 2. First, we recall the definitions of majorization and weak majorization.

Definition 1.1 (Marshall and Olkin [7])

Let $\mathbit{\lambda }=\left({\lambda }_{1},{\lambda }_{2},\dots ,{\lambda }_{n}\right)$, ${\mathbit{\lambda }}^{\ast }=\left({\lambda }_{1}^{\ast },{\lambda }_{2}^{\ast },\dots ,{\lambda }_{n}^{\ast }\right)$ denote two n-dimensional real vectors. Let ${\lambda }_{\left[1\right]}\ge {\lambda }_{\left[2\right]}\ge \cdots \ge {\lambda }_{\left[n\right]}$ and ${\lambda }_{\left[1\right]}^{\ast }\ge {\lambda }_{\left[2\right]}^{\ast }\ge \cdots \ge {\lambda }_{\left[n\right]}^{\ast }$ denote the components of λ and ${\mathbit{\lambda }}^{\ast }$ in decreasing order respectively. Similarly, let ${\lambda }_{\left(1\right)}\le {\lambda }_{\left(2\right)}\le \cdots \le {\lambda }_{\left(n\right)}$ and ${\lambda }_{\left(1\right)}^{\ast }\le {\lambda }_{\left(2\right)}^{\ast }\le \cdots \le {\lambda }_{\left(n\right)}^{\ast }$ denote the components of λ and ${\mathbit{\lambda }}^{\ast }$ in increasing order respectively.

(1)

${\mathbit{\lambda }}^{\ast }$ is said to be majorized by λ, in symbols $\mathbit{\lambda }{⪰}_{m}{\mathbit{\lambda }}^{\ast }$, if

$\sum _{i=1}^{m}{\lambda }_{\left[i\right]}\ge \sum _{i=1}^{m}{\lambda }_{\left[i\right]}^{\ast }$

for $m=1,2,\dots ,n-1$, and ${\sum }_{i=1}^{n}{\lambda }_{i}={\sum }_{i=1}^{n}{\lambda }_{i}^{\ast }$.

(2)

${\mathbit{\lambda }}^{\ast }$ is said to be weakly lower majorized by λ, in symbols $\mathbit{\lambda }{⪰}_{w}{\mathbit{\lambda }}^{\ast }$, if

$\sum _{i=1}^{m}{\lambda }_{\left[i\right]}\ge \sum _{i=1}^{m}{\lambda }_{\left[i\right]}^{\ast }$

for $m=1,2,\dots ,n-1$, and ${\sum }_{i=1}^{n}{\lambda }_{i}\ge {\sum }_{i=1}^{n}{\lambda }_{i}^{\ast }$.

(3)

${\mathbit{\lambda }}^{\ast }$ is said to be weakly upper majorized by λ, in symbols $\mathbit{\lambda }{⪰}^{w}{\mathbit{\lambda }}^{\ast }$, if

$\sum _{i=1}^{m}{\lambda }_{\left(i\right)}\le \sum _{i=1}^{m}{\lambda }_{\left(i\right)}^{\ast }$

for $m=1,2,\dots ,n-1$, and ${\sum }_{i=1}^{n}{\lambda }_{i}\le {\sum }_{i=1}^{n}{\lambda }_{i}^{\ast }$.
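These three orderings are easy to check numerically. The following Python sketch (ours, for illustration; the function names are not from the paper) compares partial sums of sorted components, which is equivalent to the conditions above since the $m=n$ inequality is exactly the condition on the total sums:

```python
import numpy as np

def weak_lower_majorizes(lam, lam_star):
    """Check lam >=_w lam_star: partial sums of the components sorted in
    decreasing order dominate, for m = 1, ..., n."""
    a = np.sort(lam)[::-1].cumsum()
    b = np.sort(lam_star)[::-1].cumsum()
    return bool(np.all(a >= b))

def weak_upper_majorizes(lam, lam_star):
    """Check lam >=^w lam_star: partial sums of the components sorted in
    increasing order are dominated, for m = 1, ..., n."""
    a = np.sort(lam).cumsum()
    b = np.sort(lam_star).cumsum()
    return bool(np.all(a <= b))

def majorizes(lam, lam_star):
    """Check lam >=_m lam_star: weak lower majorization plus equal totals."""
    return weak_lower_majorizes(lam, lam_star) and bool(
        np.isclose(np.sum(lam), np.sum(lam_star)))
```

For example, `majorizes([3, 1], [2, 2])` holds because the decreasing-order partial sums are (3, 4) versus (2, 4) and the totals agree.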

The main results of the paper are stated as follows.

Theorem 1.2 Let $X\left(t\right)$ and ${X}^{\ast }\left(t\right)$, $t\in \left[0,T\right]$, be separable Gaussian processes. Assume that the two processes have the same covariance function, i.e.,

$cov\left(X\left(s\right),X\left(t\right)\right)=cov\left({X}^{\ast }\left(s\right),{X}^{\ast }\left(t\right)\right)$

for all $s,t\in \left[0,T\right]$. Denote $u={inf}_{t\in \left[0,T\right]}min\left\{EX\left(t\right),E{X}^{\ast }\left(t\right)\right\}$ and $v={sup}_{t\in \left[0,T\right]}max\left\{EX\left(t\right),E{X}^{\ast }\left(t\right)\right\}$. Let $0\le {t}_{1}\le {t}_{2}\le \cdots \le {t}_{n}\le T$ be an arbitrary partition of $\left[0,T\right]$. Let $f:\left[u,v\right]\to R$ be a strictly monotone function, and denote ${\mathbit{\mu }}_{f}=\left(f\left(EX\left({t}_{1}\right)\right),\dots ,f\left(EX\left({t}_{n}\right)\right)\right)$, ${\mathbit{\mu }}_{f}^{\ast }=\left(f\left(E{X}^{\ast }\left({t}_{1}\right)\right),\dots ,f\left(E{X}^{\ast }\left({t}_{n}\right)\right)\right)$.

(1)

If ${f}^{\prime }\left(y\right)>0$, ${f}^{″}\left(y\right)\ge 0$ for all $y\in \left[u,v\right]$, and ${\mathbit{\mu }}_{f}{⪰}^{w}{\mathbit{\mu }}_{f}^{\ast }$, then

$P\left(\underset{t\in \left[0,T\right]}{inf}X\left(t\right)\ge x\right)\le P\left(\underset{t\in \left[0,T\right]}{inf}{X}^{\ast }\left(t\right)\ge x\right)$

for all $x\in R$;

(2)

If ${f}^{\prime }\left(y\right)<0$, ${f}^{″}\left(y\right)\le 0$ for all $y\in \left[u,v\right]$, and ${\mathbit{\mu }}_{f}{⪰}_{w}{\mathbit{\mu }}_{f}^{\ast }$, then

$P\left(\underset{t\in \left[0,T\right]}{inf}X\left(t\right)\ge x\right)\le P\left(\underset{t\in \left[0,T\right]}{inf}{X}^{\ast }\left(t\right)\ge x\right)$

for all $x\in R$;

(3)

If ${f}^{\prime }\left(y\right)>0$, ${f}^{″}\left(y\right)\le 0$ for all $y\in \left[u,v\right]$, and ${\mathbit{\mu }}_{f}{⪰}_{w}{\mathbit{\mu }}_{f}^{\ast }$, then

$P\left(\underset{t\in \left[0,T\right]}{sup}X\left(t\right)\le x\right)\le P\left(\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)\le x\right)$

for all $x\in R$;

(4)

If ${f}^{\prime }\left(y\right)<0$, ${f}^{″}\left(y\right)\ge 0$ for all $y\in \left[u,v\right]$, and ${\mathbit{\mu }}_{f}{⪰}^{w}{\mathbit{\mu }}_{f}^{\ast }$, then

$P\left(\underset{t\in \left[0,T\right]}{sup}X\left(t\right)\le x\right)\le P\left(\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)\le x\right)$

for all $x\in R$.

Setting $f\left(x\right)=x$ in Theorem 1.2, we immediately obtain the following result.

Corollary 1.3 Under the same conditions on $X\left(t\right)$, ${X}^{\ast }\left(t\right)$ and $\left\{{t}_{i},1\le i\le n\right\}$ as in Theorem 1.2, the following statements hold.

(1)

If $\left(EX\left({t}_{1}\right),\dots ,EX\left({t}_{n}\right)\right){⪰}^{w}\left(E{X}^{\ast }\left({t}_{1}\right),\dots ,E{X}^{\ast }\left({t}_{n}\right)\right)$, then

$P\left(\underset{t\in \left[0,T\right]}{inf}X\left(t\right)\ge x\right)\le P\left(\underset{t\in \left[0,T\right]}{inf}{X}^{\ast }\left(t\right)\ge x\right)$

for all $x\in R$;

(2)

If $\left(EX\left({t}_{1}\right),\dots ,EX\left({t}_{n}\right)\right){⪰}_{w}\left(E{X}^{\ast }\left({t}_{1}\right),\dots ,E{X}^{\ast }\left({t}_{n}\right)\right)$, then

$P\left(\underset{t\in \left[0,T\right]}{sup}X\left(t\right)\le x\right)\le P\left(\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)\le x\right)$

for all $x\in R$.

## 2 Proof and application

Proof of Theorem 1.2 The four conclusions of Theorem 1.2 can be proved by similar ideas, so we only give a detailed proof of part (3). Let $0\le {t}_{1}\le {t}_{2}\le \cdots \le {t}_{n}\le T$ be a partition of $\left[0,T\right]$ with mesh $\tau ={max}_{1\le i\le n}\mathrm{△}{t}_{i}$, where $\mathrm{△}{t}_{i}={t}_{i}-{t}_{i-1}$. Restricting the processes to the partition points yields the Gaussian random variables $X\left({t}_{1}\right),\dots ,X\left({t}_{n}\right)$ and ${X}^{\ast }\left({t}_{1}\right),\dots ,{X}^{\ast }\left({t}_{n}\right)$, respectively. By separability,

$\underset{t\in \left[0,T\right]}{sup}X\left(t\right)=\underset{\tau \to 0}{lim}\underset{1\le i\le n}{max}X\left({t}_{i}\right),\phantom{\rule{2em}{0ex}}\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)=\underset{\tau \to 0}{lim}\underset{1\le i\le n}{max}{X}^{\ast }\left({t}_{i}\right).$

By the assumptions of Theorem 1.2, we know

$cov\left(X\left({t}_{i}\right),X\left({t}_{j}\right)\right)=cov\left({X}^{\ast }\left({t}_{i}\right),{X}^{\ast }\left({t}_{j}\right)\right)$

for all $i,j=1,\dots ,n$, and

$\left(f\left(EX\left({t}_{1}\right)\right),\dots ,f\left(EX\left({t}_{n}\right)\right)\right){⪰}_{w}\left(f\left(E{X}^{\ast }\left({t}_{1}\right)\right),\dots ,f\left(E{X}^{\ast }\left({t}_{n}\right)\right)\right).$

From Fang and Zhang [4], we have

$P\left(\underset{1\le i\le n}{max}X\left({t}_{i}\right)\le x\right)\le P\left(\underset{1\le i\le n}{max}{X}^{\ast }\left({t}_{i}\right)\le x\right).$

Since

$P\left(\underset{t\in \left[0,T\right]}{sup}X\left(t\right)\le x\right)=P\left(\underset{\tau \to 0}{lim}\underset{1\le i\le n}{max}X\left({t}_{i}\right)\le x\right)=\underset{\tau \to 0}{lim}P\left(\underset{1\le i\le n}{max}X\left({t}_{i}\right)\le x\right),$

and

$P\left(\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)\le x\right)=P\left(\underset{\tau \to 0}{lim}\underset{1\le i\le n}{max}{X}^{\ast }\left({t}_{i}\right)\le x\right)=\underset{\tau \to 0}{lim}P\left(\underset{1\le i\le n}{max}{X}^{\ast }\left({t}_{i}\right)\le x\right).$

Combining the above three displays, we obtain

$P\left(\underset{t\in \left[0,T\right]}{sup}X\left(t\right)\le x\right)\le P\left(\underset{t\in \left[0,T\right]}{sup}{X}^{\ast }\left(t\right)\le x\right).$

□

### An application

Let $X\left(t\right)={t}^{2}+{B}^{1,H,K}\left(t\right)$ and ${X}^{\ast }\left(t\right)={t}^{3}+{B}^{2,H,K}\left(t\right)$ be Gaussian processes, where ${B}^{i,H,K}\left(t\right)$, $i=1,2$, $H\in \left(0,1\right)$, $K\in \left(0,1\right]$, are centered Gaussian processes (bifractional Brownian motions) such that

$E\left({B}^{i,H,K}\left(t\right){B}^{i,H,K}\left(s\right)\right)=\frac{1}{{2}^{K}}\left[{\left({s}^{2H}+{t}^{2H}\right)}^{K}-{|t-s|}^{2HK}\right]$

for all $s,t\in \left[0,1\right]$.
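As a side check (ours, not the paper’s), the covariance above is the bifractional Brownian motion covariance; for $K=1$ it reduces to the fractional Brownian motion covariance $\frac{1}{2}\left({s}^{2H}+{t}^{2H}-{|t-s|}^{2H}\right)$, which a short computation confirms:

```python
import numpy as np

def bifbm_cov(s, t, H, K):
    """Covariance of the bifractional Brownian motion B^{H,K}."""
    return ((s**(2 * H) + t**(2 * H))**K - abs(t - s)**(2 * H * K)) / 2**K

H = 0.7
s, t = 0.3, 0.8

# Fractional Brownian motion covariance with Hurst parameter H.
fbm = (s**(2 * H) + t**(2 * H) - abs(t - s)**(2 * H)) / 2

# K = 1 recovers the fBm covariance; the kernel is also symmetric in (s, t).
assert np.isclose(bifbm_cov(s, t, H, 1.0), fbm)
assert np.isclose(bifbm_cov(s, t, H, 0.6), bifbm_cov(t, s, H, 0.6))
```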

It is easy to check that $X\left(t\right)$ and ${X}^{\ast }\left(t\right)$ satisfy the conditions in Theorem 1.2.

Let $0\le {t}_{1}\le {t}_{2}\le \cdots \le {t}_{n}\le 1$ be a partition of $\left[0,1\right]$. Since $0\le {t}_{i}^{3}\le {t}_{i}^{2}$ for every i, we have

$\left({t}_{1}^{2},\dots ,{t}_{n}^{2}\right){⪰}_{w}\left({t}_{1}^{3},\dots ,{t}_{n}^{3}\right){⪰}^{w}\left({t}_{1}^{2},\dots ,{t}_{n}^{2}\right).$
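Both weak majorization relations in this display can be verified numerically on any partition; a quick sketch (with an arbitrary uniform partition of our choosing):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 11)   # a partition of [0, 1]
sq, cu = t**2, t**3

# (t_1^2,...,t_n^2) >=_w (t_1^3,...,t_n^3): decreasing-order partial sums dominate.
lower_chain = bool(np.all(np.sort(sq)[::-1].cumsum() >= np.sort(cu)[::-1].cumsum()))

# (t_1^3,...,t_n^3) >=^w (t_1^2,...,t_n^2): increasing-order partial sums are dominated.
upper_chain = bool(np.all(np.sort(cu).cumsum() <= np.sort(sq).cumsum()))

print(lower_chain, upper_chain)   # both True, since t^2 >= t^3 componentwise on [0, 1]
```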

From Corollary 1.3, we have, for all $x\in R$,

$\begin{array}{c}P\left(\underset{t\in \left[0,1\right]}{inf}\left[{t}^{2}+{B}^{1,H,K}\left(t\right)\right]\ge x\right)\ge P\left(\underset{t\in \left[0,1\right]}{inf}\left[{t}^{3}+{B}^{2,H,K}\left(t\right)\right]\ge x\right),\hfill \\ P\left(\underset{t\in \left[0,1\right]}{sup}\left[{t}^{2}+{B}^{1,H,K}\left(t\right)\right]\le x\right)\le P\left(\underset{t\in \left[0,1\right]}{sup}\left[{t}^{3}+{B}^{2,H,K}\left(t\right)\right]\le x\right).\hfill \end{array}$

## References

1. Slepian D: The one-sided barrier problem for Gaussian processes. Bell Syst. Tech. J. 1962, 41: 463–501.

2. Adler RJ, Taylor JE: Random Fields and Geometry. Springer, New York; 2007.

3. Maurer A: Transfer bounds for linear feature learning. Mach. Learn. 2009, 75: 327–350. 10.1007/s10994-009-5109-7

4. Fang L, Zhang X: Slepian’s inequality with respect to majorization. Linear Algebra Appl. 2010, 434: 1107–1118.

5. Schur I: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsberichte der Berliner Mathematischen Gesellschaft 1923, 22: 9–20.

6. Hardy GH, Littlewood JE, Pólya G: Some simple inequalities satisfied by convex functions. Messenger Math. 1929, 58: 145–152.

7. Marshall AW, Olkin I: Inequalities: Theory of Majorization and Its Applications. Academic Press, New York; 1979.

8. Ando T: Majorization and inequalities in matrix theory. Linear Algebra Appl. 1994, 199: 17–67.

## Acknowledgements

This research is supported by the National Statistical Science Research Project of China (No. 2012LY158) and the Natural Science Foundation of Anhui Province (No. 1208085MA11).

## Author information


### Corresponding author

Correspondence to Longxiang Fang.

### Competing interests

The author declares that they have no competing interests.


Fang, L. Slepian’s inequality for Gaussian processes with respect to weak majorization. J Inequal Appl 2013, 5 (2013). https://doi.org/10.1186/1029-242X-2013-5