# Probability inequality concerning a chirp-type signal

## Abstract

Probability inequalities for random variables play an important role, especially in the theory of limit theorems for sums of independent random variables. In this note, we give a proof of the following:

Let $\{\epsilon_t\}$ be a sequence of independent random variables such that $E(\epsilon_t) = 0$, $E(\epsilon_t^2) = v < \infty$ and $E(\epsilon_t^4) = \sigma < \infty$. Then

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right| = O_p\left(n^{\beta + 7/8}\right).$$

This result is useful in estimating the parameters of a chirp-type statistical model and in establishing the consistency of the estimators.

## 1 Introduction

In 1952, Whittle [1] considered the problem of estimating the parameters of a sine wave:

$$X_t = A\cos\omega t + B\sin\omega t + \epsilon_t,$$

where the $X_t$'s are the observations and the $\epsilon_t$'s are independent, identically distributed random variables with mean zero and finite (unknown) variance. Whittle's solution to this estimation problem, and his proof of the consistency of the estimators, used arguments that are not mathematically rigorous. In 1973, Walker [2] gave a rigorous solution to Whittle's problem. In his proof of consistency, he used the following $O_p$ result:

$$\max_{0 \le \omega \le \pi} \left| \sum_{t=1}^{n} Y_t e^{i\omega t} \right| = O_p\left(n^{3/4}\right),$$

where ${ Y t }$ is a linear process.

In this paper, we extend the above $O p$ result to obtain

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right| = O_p\left(n^{\beta + 7/8}\right).$$
(1)

We came across this problem while attempting to establish the consistency of the parameter estimators of the model

$$X_t = A\cos\left(\omega t + \alpha t^2\right) + B\sin\left(\omega t + \alpha t^2\right) + \epsilon_t,$$

where the $X_t$'s and $\epsilon_t$'s are as above. Models of this type are referred to as 'chirp' models [3, 4] and [5], and they have drawn the attention of many researchers [6, 7] and [8]. Although we searched the literature for our main result (1), we were unable to find it. We make use of the following basic results to establish our main result.
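As an aside (not part of the original derivation), the chirp model is easy to simulate; the following minimal sketch uses hypothetical parameter values and Gaussian noise, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, chosen only for illustration
A, B = 1.0, 0.5           # amplitudes
omega, alpha = 0.3, 0.01  # frequency and chirp rate, both in [0, pi)
n = 500

t = np.arange(1, n + 1)
phase = omega * t + alpha * t**2
eps = rng.standard_normal(n)  # independent noise: mean 0, finite 4th moment

# Chirp model: X_t = A cos(omega t + alpha t^2) + B sin(...) + eps_t
X = A * np.cos(phase) + B * np.sin(phase) + eps
```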

## 2 Basic results

Definition 1 Let $\{X_n\}$ be a sequence of random variables. We say that $X_n = O_p(1)$ if for each given $\epsilon > 0$, there exists $M > 0$ such that $P(|X_n| > M) < \epsilon$ for all $n$.

Let $( a n )$ be a sequence of nonzero real numbers. We say that $X n = O p ( a n )$ if $X n / a n = O p (1)$.

Lemma 1 Let $\{X_n\}$ be a sequence of random variables, and let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonzero real numbers. Then

$$X_n = O_p(b_n) \Rightarrow a_n X_n = O_p(a_n b_n).$$

Lemma 2 Let $\{X_n\}$ be a sequence of random variables, let $\{a_n\}$ be a sequence of positive real numbers, and let $q$ be a positive real number. Then

$$X_n = O_p(a_n) \Rightarrow |X_n|^q = O_p\left(a_n^q\right).$$

Definition 2 Let $\{X_n\}$ be a sequence of random variables. We say that $X_n = o_p(1)$ if $X_n \to 0$ in probability. (So, $X_n = o_p(1)$ if for each $\epsilon > 0$, we have $\lim_{n \to \infty} P(|X_n| > \epsilon) = 0$.)

Lemma 3 Let $\{X_n\}$ be a sequence of random variables, and let $\{a_n\}$ be a sequence of real numbers with $a_n \to 0$ as $n \to \infty$. If $X_n = O_p(1)$, then $a_n X_n \xrightarrow{P} 0$.

Lemma 4 Let ${ X n }$ be a sequence of random variables, and let β be a positive real number. Then

$$X_n = O_p\left(\frac{1}{n^{\beta}}\right) \Rightarrow X_n = o_p(1).$$

Lemma 5 Let $\{X_n\}$ be a sequence of random variables, and let $\beta$ be any real number. If $X_n = O_p(n^{\beta})$ and $Y_n = O_p(n^{\beta})$, then $X_n + Y_n = O_p(n^{\beta})$.

Lemma 6 Let $\mu$ be any real number. Let $\{X_n\}$ be a sequence of random variables such that $E(|X_n|) = O(n^{\mu})$. Then $X_n = O_p(n^{\mu})$.
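Lemma 6 follows directly from Markov's inequality; a one-line sketch (with $C$ denoting the implied constant in $E(|X_n|) = O(n^{\mu})$):

```latex
P\left( \frac{|X_n|}{n^{\mu}} > M \right)
  \le \frac{E(|X_n|)}{M \, n^{\mu}}
  \le \frac{C}{M},
```

which can be made smaller than any given $\epsilon > 0$ by choosing $M$ large enough, so $X_n / n^{\mu} = O_p(1)$.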

We now establish two lemmas that we need to prove the main result.

Lemma 7 Let $a_{t,u}$, $t = 1, 2, \dots, n$, $u = 1, 2, \dots, n$, be complex numbers. Then

$$\sum_{t=1}^{n} \sum_{u=1}^{n} a_{t,u} = \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} \left[ a_{t,t+s} + a_{t+s,t} \right] + \sum_{t=1}^{n} a_{t,t}.$$

Proof The sum $\sum_{t=1}^{n} \sum_{u=1}^{n} a_{t,u}$ is the sum of the terms in the array:

$$\begin{matrix}
a_{1,1} & a_{1,2} & a_{1,3} & \cdots & a_{1,n-1} & a_{1,n} \\
a_{2,1} & a_{2,2} & a_{2,3} & \cdots & a_{2,n-1} & a_{2,n} \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n-1,1} & a_{n-1,2} & a_{n-1,3} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
a_{n,1} & a_{n,2} & a_{n,3} & \cdots & a_{n,n-1} & a_{n,n}.
\end{matrix}$$

We can compute this sum by adding along the diagonals, so it equals

$$\sum_{s=1}^{n-1} \sum_{t=1}^{n-s} a_{t,t+s} + \sum_{p=1}^{n-1} \sum_{u=1}^{n-p} a_{u+p,u} + \sum_{t=1}^{n} a_{t,t}.$$

Therefore

$$\sum_{t=1}^{n} \sum_{u=1}^{n} a_{t,u} = \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} \left[ a_{t,t+s} + a_{t+s,t} \right] + \sum_{t=1}^{n} a_{t,t}.$$

□
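As a quick numerical sanity check (not part of the original paper), the identity of Lemma 7 can be verified on a small random complex array; each diagonal sum corresponds to a call to `np.trace` with a positive or negative offset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Random complex numbers a_{t,u}; row t, column u (0-based here)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lhs = a.sum()  # sum over all t, u

# Right-hand side of Lemma 7: super-diagonals a_{t,t+s} and
# sub-diagonals a_{t+s,t} for s = 1..n-1, plus the main diagonal.
rhs = sum(np.trace(a, offset=s) + np.trace(a, offset=-s) for s in range(1, n))
rhs += np.trace(a)

assert np.allclose(lhs, rhs)
```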

Lemma 8 Let $b_t$, $t = 1, 2, \dots, n$, be complex numbers. Then

$$\left| \sum_{t=1}^{n} b_t \right|^2 \le 2 \left| \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} b_t \bar{b}_{t+s} \right| + \sum_{t=1}^{n} |b_t|^2.$$

Proof Since $\left| \sum_{t=1}^{n} b_t \right|^2 = \sum_{t=1}^{n} \sum_{u=1}^{n} b_t \bar{b}_u$, applying Lemma 7 with $a_{t,u} = b_t \bar{b}_u$ gives

$$\begin{aligned}
\left| \sum_{t=1}^{n} b_t \right|^2
&= \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} \left[ b_t \bar{b}_{t+s} + b_{t+s} \bar{b}_t \right] + \sum_{t=1}^{n} b_t \bar{b}_t \\
&= \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} 2\,\Re\left( b_t \bar{b}_{t+s} \right) + \sum_{t=1}^{n} |b_t|^2 \\
&\le 2 \left| \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} b_t \bar{b}_{t+s} \right| + \sum_{t=1}^{n} |b_t|^2.
\end{aligned}$$

□
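Lemma 8 can likewise be spot-checked numerically (again, an illustration rather than part of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = abs(b.sum()) ** 2

# Double sum of b_t * conj(b_{t+s}) over s = 1..n-1, t = 1..n-s
cross = sum(np.sum(b[: n - s] * np.conj(b[s:])) for s in range(1, n))
rhs = 2 * abs(cross) + np.sum(np.abs(b) ** 2)

# Lemma 8: |sum b_t|^2 <= 2|double sum| + sum |b_t|^2
assert lhs <= rhs + 1e-9
```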

## 3 Main result

Theorem 1 Let $\{\epsilon_t\}$ be a sequence of independent random variables such that $E(\epsilon_t) = 0$, $E(\epsilon_t^2) = v < \infty$ and $E(\epsilon_t^4) = \sigma < \infty$. Then

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right| = O_p\left(n^{\beta + 7/8}\right).$$

Proof Taking $b_t = t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)}$ in Lemma 8, we see that

$$\begin{aligned}
\left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2
&\le 2 \left| \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{i(\omega t + \alpha t^2)} e^{-i(\omega (t+s) + \alpha (t+s)^2)} \right| + \sum_{t=1}^{n} \left| t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \\
&= 2 \left| \sum_{s=1}^{n-1} \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-i\omega s} e^{-i\alpha s^2} e^{-2i\alpha t s} \right| + \sum_{t=1}^{n} t^{2\beta} \epsilon_t^2.
\end{aligned}$$

Using the triangle inequality, we obtain

$$\left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \le 2 \sum_{s=1}^{n-1} \left| \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right| + \sum_{t=1}^{n} t^{2\beta} \epsilon_t^2.$$
(2)

Fix $s$ and consider $\left| \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right|$. By Lemma 8 with $b_t = t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s}$, we obtain

$$\begin{aligned}
\left| \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right|^2
\le{}& 2 \left| \sum_{p=1}^{n-s-1} \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s}\, e^{-2i\alpha t s} e^{2i\alpha s (t+p)} \right| \\
&+ \sum_{t=1}^{n-s} \left| t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right|^2,
\end{aligned}$$

and it follows from the triangle inequality that

$$\left| \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right|^2 \le 2 \sum_{p=1}^{n-s-1} \left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2.$$

Therefore

$$\left| \sum_{t=1}^{n-s} t^{\beta} (t+s)^{\beta} \epsilon_t \epsilon_{t+s}\, e^{-2i\alpha t s} \right| \le \sqrt{2} \left[ \sum_{p=1}^{n-s-1} \left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2 \right]^{1/2}.$$

Substituting this into equation (2), we obtain

$$\left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \le 2\sqrt{2} \sum_{s=1}^{n-1} \left[ \sum_{p=1}^{n-s-1} \left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2 \right]^{1/2} + \sum_{t=1}^{n} t^{2\beta} \epsilon_t^2.$$

Since the right-hand side does not depend on $\omega$ or $\alpha$, it follows that

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \le 2\sqrt{2} \sum_{s=1}^{n-1} \left[ \sum_{p=1}^{n-s-1} \left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2 \right]^{1/2} + \sum_{t=1}^{n} t^{2\beta} \epsilon_t^2.$$

Taking the expectation of both sides of the above inequality, we obtain

$$E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right] \le 2\sqrt{2} \sum_{s=1}^{n-1} E\left\{ \left[ \sum_{p=1}^{n-s-1} \left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2 \right]^{1/2} \right\} + \sum_{t=1}^{n} t^{2\beta} E\left(\epsilon_t^2\right).$$

We claim that

$$E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right] = O\left(n^{2\beta + 7/4}\right).$$

We note first that

$$\sum_{t=1}^{n} t^{2\beta} E\left(\epsilon_t^2\right) = v \sum_{t=1}^{n} t^{2\beta} = O\left(n^{2\beta + 1}\right) = O\left(n^{2\beta + 7/4}\right),$$

so the term $\sum_{t=1}^{n} t^{2\beta} E(\epsilon_t^2)$ can be ignored, and it suffices to show that the remaining term is $O(n^{2\beta + 7/4})$.

Taking $x = |z|^{1/2}$ in Schwarz's inequality $E|x| \le \left(E|x|^2\right)^{1/2}$ yields $E\left(|z|^{1/2}\right) \le \left(E|z|\right)^{1/2}$. Using this on the right-hand side of the above inequality (and dropping the constant factor $2\sqrt{2}$ and the negligible last term, neither of which affects the order), we get

$$E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right] \le \sum_{s=1}^{n-1} \left\{ \sum_{p=1}^{n-s-1} E\left| \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \right| + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \right\}^{1/2}.$$

Using Schwarz’s inequality again on the first term on the right, we get

$$\begin{aligned}
E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right]
\le{}& \sum_{s=1}^{n-1} \Biggl[ \sum_{p=1}^{n-s-1} \Biggl[ E\Biggl( \sum_{t=1}^{n-s-p} t^{\beta} (t+s)^{\beta} (t+p)^{\beta} (t+p+s)^{\beta} \epsilon_t \epsilon_{t+s} \epsilon_{t+p} \epsilon_{t+p+s} \Biggr)^{2} \Biggr]^{1/2} \\
&+ \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \Biggr]^{1/2} \\
={}& \sum_{s=1}^{n-1} \Biggl[ \sum_{p=1}^{n-s-1} \Biggl[ E\Biggl( \sum_{t=1}^{n-s-p} t^{2\beta} (t+s)^{2\beta} (t+p)^{2\beta} (t+p+s)^{2\beta} \epsilon_t^2 \epsilon_{t+s}^2 \epsilon_{t+p}^2 \epsilon_{t+p+s}^2 \Biggr) \\
&+ E\Biggl( 2 \sum_{\substack{u,v = 1 \\ u < v}}^{n-s-p} u^{\beta} (u+s)^{\beta} (u+p)^{\beta} (u+p+s)^{\beta}\, v^{\beta} (v+s)^{\beta} (v+p)^{\beta} (v+p+s)^{\beta} \\
&\qquad \times \epsilon_u \epsilon_{u+s} \epsilon_{u+p} \epsilon_{u+p+s} \epsilon_v \epsilon_{v+s} \epsilon_{v+p} \epsilon_{v+p+s} \Biggr) \Biggr]^{1/2} + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \Biggr]^{1/2}.
\end{aligned}$$

So,

$$\begin{aligned}
E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right]
\le{}& \sum_{s=1}^{n-1} \Biggl[ \sum_{p=1}^{n-s-1} \Biggl[ \sum_{t=1}^{n-s-p} t^{2\beta} (t+s)^{2\beta} (t+p)^{2\beta} (t+p+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2 \epsilon_{t+p}^2 \epsilon_{t+p+s}^2\right) \\
&+ 2 \sum_{\substack{u,v = 1 \\ u < v}}^{n-s-p} u^{\beta} (u+s)^{\beta} (u+p)^{\beta} (u+p+s)^{\beta}\, v^{\beta} (v+s)^{\beta} (v+p)^{\beta} (v+p+s)^{\beta} \\
&\qquad \times E\left(\epsilon_u \epsilon_{u+s} \epsilon_{u+p} \epsilon_{u+p+s} \epsilon_v \epsilon_{v+s} \epsilon_{v+p} \epsilon_{v+p+s}\right) \Biggr]^{1/2} + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \Biggr]^{1/2}.
\end{aligned}$$

Consider $E(\epsilon_u \epsilon_{u+s} \epsilon_{u+p} \epsilon_{u+p+s} \epsilon_v \epsilon_{v+s} \epsilon_{v+p} \epsilon_{v+p+s})$ with $u < v$. Since $u < v$, the subscript $v+p+s$ is strictly larger than all the others, so $\epsilon_{v+p+s}$ is independent of the remaining factors. As $E(\epsilon_t) = 0$ for all $t$, we have

$$E(\epsilon_u \epsilon_{u+s} \epsilon_{u+p} \epsilon_{u+p+s} \epsilon_v \epsilon_{v+s} \epsilon_{v+p} \epsilon_{v+p+s}) = E(\epsilon_u \epsilon_{u+s} \epsilon_{u+p} \epsilon_{u+p+s} \epsilon_v \epsilon_{v+s} \epsilon_{v+p})\, E(\epsilon_{v+p+s}) = 0.$$

Therefore

$$E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right] \le \sum_{s=1}^{n-1} \left[ \sum_{p=1}^{n-s-1} \left[ \sum_{t=1}^{n-s-p} t^{2\beta} (t+s)^{2\beta} (t+p)^{2\beta} (t+p+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2 \epsilon_{t+p}^2 \epsilon_{t+p+s}^2\right) \right]^{1/2} + \sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \right]^{1/2}.$$

Now consider the term $E(\epsilon_t^2 \epsilon_{t+s}^2)$. Since $s \ge 1$, the factors $\epsilon_t$ and $\epsilon_{t+s}$ are independent, so $E(\epsilon_t^2 \epsilon_{t+s}^2) = v^2 < \infty$. Hence there is a constant $K$ such that

$$\sum_{t=1}^{n-s} t^{2\beta} (t+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2\right) \le K n^{4\beta + 1}.$$

Similarly, since $E(\epsilon_t^2) = v < \infty$, $E(\epsilon_t^4) = \sigma < \infty$ and the $\epsilon_t$'s are independent, the moment $E(\epsilon_t^2 \epsilon_{t+s}^2 \epsilon_{t+p}^2 \epsilon_{t+p+s}^2)$ is bounded: it equals $v^4$ when the four subscripts are distinct, and $v^2 \sigma$ when $p = s$ (in which case $t+s = t+p$). Hence, for some constant $K$,

$$\sum_{t=1}^{n-s-p} t^{2\beta} (t+s)^{2\beta} (t+p)^{2\beta} (t+p+s)^{2\beta} E\left(\epsilon_t^2 \epsilon_{t+s}^2 \epsilon_{t+p}^2 \epsilon_{t+p+s}^2\right) \le K n^{8\beta + 1}.$$

Thus

$$\begin{aligned}
E\left[ \max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 \right]
&\le \sum_{s=1}^{n-1} \left[ \sum_{p=1}^{n-s-1} \left[ K n^{8\beta + 1} \right]^{1/2} + K n^{4\beta + 1} \right]^{1/2} \\
&\le \sum_{s=1}^{n-1} \left[ n K^{1/2} n^{4\beta + 1/2} + K n^{4\beta + 1} \right]^{1/2} \\
&\le \sum_{s=1}^{n-1} \left[ K_1 n^{4\beta + 3/2} \right]^{1/2} \le n K_1^{1/2} n^{2\beta + 3/4} = O\left(n^{2\beta + 7/4}\right).
\end{aligned}$$

It follows by virtue of Lemma 6 that

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right|^2 = O_p\left(n^{2\beta + 7/4}\right).$$

Applying Lemma 2 with $q = 1/2$, we obtain

$$\max_{\substack{0 \le \omega < \pi \\ 0 \le \alpha < \pi}} \left| \sum_{t=1}^{n} t^{\beta} \epsilon_t e^{i(\omega t + \alpha t^2)} \right| = O_p\left(n^{\beta + 7/8}\right),$$

which completes the proof of the theorem. □
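The rate in Theorem 1 can be illustrated by a crude Monte Carlo sketch (not from the paper): a grid maximum only approximates the supremum over $[0, \pi)^2$, and the choices of $\beta$, grid size, and noise below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 0.0  # illustrative choice of beta

def grid_max(n, k=32):
    """Approximate max over (omega, alpha) in [0, pi)^2 of
    |sum_t t^beta eps_t exp(i(omega t + alpha t^2))| on a k x k grid."""
    t = np.arange(1, n + 1)
    eps = rng.standard_normal(n)
    grid = np.linspace(0.0, np.pi, k, endpoint=False)
    best = 0.0
    for omega in grid:
        for alpha in grid:
            s = np.sum(t**beta * eps * np.exp(1j * (omega * t + alpha * t**2)))
            best = max(best, abs(s))
    return best

# The theorem says grid_max(n) / n**(beta + 7/8) stays stochastically
# bounded as n grows; the printed ratios should be moderate, not exploding.
for n in (200, 400, 800):
    print(n, grid_max(n) / n ** (beta + 7 / 8))
```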

## References

1. Whittle, P.: The simultaneous estimation of a time series harmonic component and covariance structure. Trab. Estad. 3, 43–57 (1952). 10.1007/BF03002861

2. Walker, A.M.: On the estimation of a harmonic component in a time series with stationary dependent residuals. Adv. Appl. Probab. 5, 217–241 (1973). 10.2307/1426034

3. Bretthorst, G.L.: Bayesian Spectrum Analysis and Parameter Estimation. Lecture Notes in Statistics, vol. 48. Springer, Berlin (1988)

4. Smith, C.R., Erickson, G.J.: Maximum-Entropy and Bayesian Spectral Analysis and Estimation Problems. Reidel, Dordrecht (1987)

5. Kundu, D., Nandi, S.: Parameter estimation of chirp signals in presence of stationary noise. Stat. Sin. 18, 187–201 (2008)

6. Cochrane, T., Pinner, C.: An improved Mordell type bound for exponential sums. Proc. Am. Math. Soc. 133(2), 313–320 (2004)

7. Konyagin, S.V., Lev, V.F.: On the distribution of exponential sums. Electron. J. Comb. Number Theory 0, A01 (2000)

8. Bourgain, J., Glibichuk, A.A., Konyagin, S.V.: Estimates for the number of sums and products and for exponential sums in fields of prime order. J. Lond. Math. Soc. 73(2), 380–398 (2005)

## Author information


### Corresponding author

Correspondence to Kanthi Perera.

### Competing interests

The author declares that she has no competing interests.


Perera, K. Probability inequality concerning a chirp-type signal. J Inequal Appl 2013, 131 (2013). https://doi.org/10.1186/1029-242X-2013-131 