Maximum likelihood estimators in linear regression models with Ornstein-Uhlenbeck process
© Hu et al.; licensee Springer. 2014
Received: 5 February 2014
Accepted: 9 July 2014
Published: 19 August 2014
The paper studies the linear regression model
whose errors follow an Ornstein-Uhlenbeck process with parameters λ and σ, driven by a standard Brownian motion. Firstly, the maximum likelihood (ML) estimators of β, λ and σ² are given. Secondly, under general conditions, the asymptotic properties of the ML estimators are investigated. Then, limiting distributions for the likelihood ratio test statistics of the hypotheses are given. Lastly, the validity of the method is illustrated by two real examples.
MSC: 62J05, 62M10, 60J60.
Keywords: linear regression model; maximum likelihood estimator; Ornstein-Uhlenbeck process; asymptotic property; likelihood ratio test
where λ and σ are parameters and {B(t), t ≥ 0} is a standard Brownian motion.
It is well known that the linear regression model is among the most important and widely used models in the statistical literature, and it has attracted many researchers. For the ordinary linear regression model (when the errors are independent and identically distributed (i.i.d.) random variables), Wang and Zhou, Anatolyev, Bai and Guo, Chen, Gil et al., Hampel et al., Cui, Durbin, and Li and Yang used various estimation methods to obtain estimators of the unknown parameters in (1.1) and discussed some large- or small-sample properties of these estimators. Recently, linear regression with serially correlated errors has attracted increasing attention from statisticians and economists. One case of considerable interest is that of autoregressive errors: Hu, Wu, and Fox and Taqqu established asymptotic normality with the usual √n-normalization in the case of long-memory stationary Gaussian errors. Giraitis and Surgailis extended this result to non-Gaussian linear sequences. Koul and Surgailis established the asymptotic normality of the Whittle estimator in linear regression models with non-Gaussian long-memory moving-average errors. Shiohama and Taniguchi estimated the regression parameters in a linear regression model with autoregressive errors. Fan investigated moderate deviations for M-estimators in linear models with ϕ-mixing errors.
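As a point of reference for regression with serially correlated errors, the following is a minimal feasible-GLS (Cochrane-Orcutt) sketch for a linear model with AR(1) errors. It is not one of the cited estimators, and all numerical values and the simulated design are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y_i = x_i' beta + eps_i with AR(1) errors eps_i = phi*eps_{i-1} + u_i.
n, beta, phi = 500, np.array([1.0, 2.0]), 0.6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.normal(scale=0.5, size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = phi * eps[i - 1] + u[i]
y = X @ beta + eps

# Step 1: OLS, then estimate phi from the lag-1 autocorrelation of residuals.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b_ols
phi_hat = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])

# Step 2: quasi-difference and rerun OLS (the Cochrane-Orcutt transform).
ys = y[1:] - phi_hat * y[:-1]
Xs = X[1:] - phi_hat * X[:-1]
b_gls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
```

The quasi-differencing whitens the AR(1) error, so the second-stage OLS is (approximately) efficient; this is a standard baseline rather than the quasi-ML approach developed below.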
This process is now widely used in many areas of application. The main characteristic of the Ornstein-Uhlenbeck process is its tendency to return towards the long-term equilibrium level μ. This property, known as mean reversion, is found in many real-life processes, e.g., in commodity and energy price processes (see Fasen, Yu, Geman). A number of papers are concerned with the Ornstein-Uhlenbeck process, for example, Janczura et al., Zhang et al., Rieder, Iacus, Bishwal, Shimizu, Zhang and Zhang, Chronopoulou and Viens, Lin and Wang, and Xiao et al. It is well known that the solution of model (1.2) is an autoregressive process. For constant, functional, or random coefficient autoregressive models, many authors (for example, Magdalinos, Andrews and Guggenberger, Fan and Yao, Berk, Goldenshluger and Zeevi, Liebscher, Baran et al., Distaso, and Harvill and Ray) used various estimation methods to obtain estimators and discussed asymptotic properties of these estimators, or investigated hypothesis testing.
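Mean reversion is easy to visualize by simulation. The sketch below uses the exact discretization of a mean-reverting Ornstein-Uhlenbeck process; the parameter names (lam, mu, sigma) and values are illustrative and do not follow the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Exact simulation of dX_t = lam*(mu - X_t) dt + sigma dB_t on a grid.
lam, mu, sigma = 2.0, 1.0, 0.3
d, n = 0.01, 5000            # step size and number of steps
a = np.exp(-lam * d)         # AR(1) coefficient of the exact discretization
s = sigma * np.sqrt((1 - a**2) / (2 * lam))  # innovation standard deviation
x = np.empty(n)
x[0] = 5.0                   # start far from the long-run level mu
for i in range(1, n):
    x[i] = mu + a * (x[i - 1] - mu) + s * rng.normal()
```

Starting from x[0] = 5, the path decays towards μ = 1 at rate λ and then fluctuates around it with stationary standard deviation σ/√(2λ), which is the mean-reversion behaviour described above.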
where the mean-reversion level is time dependent and involves three parameters. Thus, model (1.3) is a general Ornstein-Uhlenbeck process. Its special cases have received much attention and have been applied in many fields, such as economics, physics, geography, geology, biology and agriculture. Dehling et al. estimated the parameters of the model by maximum likelihood and proved strong consistency and asymptotic normality of the estimators. Lin and Wang established the existence of a successful coupling for a class of stochastic differential equations given by (1.3). Bishwal investigated the uniform rate of weak convergence of the minimum contrast estimator in the Ornstein-Uhlenbeck process (1.3).
where the errors are i.i.d. random variables and the process is observed at an equidistant time lag d, fixed in advance. Models (1.1) and (1.5) include many special cases, such as linear regression models with constant coefficient autoregressive errors (see Hu, Wu, Maller, Pere and Fuller), Ornstein-Uhlenbeck time series or processes (see Rieder, Iacus, Bishwal, Shimizu, and Zhang and Zhang), and constant coefficient autoregressive processes (see Chambers, Hamilton, Brockwell and Davis, and Abadir and Lucas).
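The link between the continuous-time and discrete-time cases is that an Ornstein-Uhlenbeck error process sampled at lag d is exactly a constant-coefficient AR(1) with coefficient e^{-λd}. A small sketch, with illustrative parameter values, simulates such a sampled process and recovers λ from the lag-1 autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(2)

# An OU error process observed at equidistant lag d satisfies
#   eps_i = a * eps_{i-1} + eta_i,   a = exp(-lam * d),
#   Var(eta_i) = sigma^2 * (1 - a**2) / (2 * lam),
# i.e. a constant-coefficient AR(1) (values below are illustrative).
lam, sigma, d, n = 1.5, 1.0, 0.1, 20000
a = np.exp(-lam * d)
s = sigma * np.sqrt((1 - a**2) / (2 * lam))
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = a * eps[i - 1] + s * rng.normal()

# The lag-1 sample autocorrelation recovers a, hence lam = -log(a) / d.
a_hat = (eps[:-1] @ eps[1:]) / (eps[:-1] @ eps[:-1])
lam_hat = -np.log(a_hat) / d
```

This is why estimation and testing for models (1.1) and (1.5) cover the autoregressive special cases listed above.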
The paper discusses models (1.1) and (1.5). The organization of the paper is as follows. In Section 2, estimators of β, θ and σ² are obtained by the quasi-maximum likelihood method. Under general conditions, the existence and consistency of the quasi-maximum likelihood estimators, as well as their asymptotic normality, are investigated in Section 3. Hypothesis testing is considered in Section 4. Some preliminary lemmas are presented in Section 5, the main proofs of the theorems in Section 6, and two real examples in Section 7.
2 Estimation method
To obtain our results, the following conditions are sufficient (see Maller).
where λ_max(·) denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.
For ease of exposition, we introduce the following notation, which will be used throughout the paper.
3 Large sample properties of the estimators
In the following, we investigate some special cases of models (1.1) and (1.5). From Theorem 3.1 and Theorem 3.2 we obtain the following results; their proofs are omitted.
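The consistency asserted by the large-sample theory can be checked in a small Monte Carlo experiment: the spread of a regression estimator computed on simulated data should shrink as the sample size grows. The sketch below does this for the slope in a one-regressor model with AR(1) errors; the design, parameter values and the simple least-squares estimator are all illustrative assumptions, not the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch of consistency: the spread of the slope estimator in
# y_i = beta * x_i + eps_i (AR(1) errors) shrinks as n grows.
beta, a = 2.0, 0.7

def slope_est(n, reps=200):
    """Return `reps` slope estimates, each from a fresh sample of size n."""
    est = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)
        eps = np.zeros(n)
        for i in range(1, n):
            eps[i] = a * eps[i - 1] + rng.normal()
        y = beta * x + eps
        est[r] = (x @ y) / (x @ x)   # least-squares slope
    return est

sd_small = slope_est(100).std()
sd_large = slope_est(1600).std()     # 16x the data: spread should drop ~4x
```

With 16 times the data, the standard deviation of the estimator drops by roughly a factor of 4, consistent with a √n convergence rate.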
4 Hypothesis testing
Model (1.3), namely models (1.1) and (1.2), and model (1.4) correspond to different values of the parameter, so we must decide which one applies to the data. In this section, we consider this question of hypothesis testing and obtain limiting distributions for likelihood ratio (LR) test statistics (see Fan and Jiang).
Large values of the test statistic suggest rejection of the null hypothesis.
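The LR mechanics can be sketched on a simpler nested pair of models: H0 of uncorrelated Gaussian errors against H1 of AR(1) errors. Twice the log-likelihood gap is asymptotically χ² with degrees of freedom equal to the number of restricted parameters (one here), and large values reject H0. This is an illustrative stand-in, not the paper's exact statistic:

```python
import numpy as np

rng = np.random.default_rng(4)

def loglik_iid(e):
    # Gaussian log-likelihood with the variance profiled out (ML estimate).
    n = len(e)
    s2 = e @ e / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

def loglik_ar1(e):
    # Conditional Gaussian log-likelihood of an AR(1), coefficient profiled out.
    a = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])   # conditional MLE of the AR coefficient
    u = e[1:] - a * e[:-1]
    n = len(u)
    s2 = u @ u / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

e = rng.normal(size=2000)   # data generated under H0 (no autocorrelation)
# Condition both likelihoods on the same n-1 observations so the models nest.
lr = 2 * (loglik_ar1(e) - loglik_iid(e[1:]))
# Compare lr with a chi-square(1) quantile, e.g. 3.841 at the 5% level.
```

Because the AR(1) model nests the uncorrelated one on the conditioned sample, lr is nonnegative, and under H0 it is typically below the χ²(1) critical value.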
5 Some lemmas
Throughout this paper, let C denote a generic positive constant which may take different values at each occurrence. To prove our main results, we first introduce the following lemmas.
Lemma 5.2 The matrix is positive definite for sufficiently large n.
From (5.8)-(5.10), it follows that . The proof is completed. □
Lemma 5.3 (Maller )
Hence, (5.23) follows from (5.24)-(5.27).
Thus (5.28) follows immediately from (5.31), (5.34)-(5.36), (5.38), (5.40) and (5.41).
Hence, (5.42) follows immediately from (5.43)-(5.45), (5.53), (5.57) and (5.58). This completes the proof of (5.11) from (5.17), (5.23), (5.28) and (5.42).
This follows immediately from (2.20) and the Markov inequality.
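For reference, the Markov-inequality step used here has the standard form (a generic statement, for r > 0 and a > 0):

```latex
P\bigl(|X| \ge a\bigr) \le \frac{\mathbb{E}\,|X|^{r}}{a^{r}},
```

so a moment bound such as the one in (2.20) immediately controls the corresponding tail probability.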
This implies (5.13). □
Lemma 5.5 (Hall and Heyde )
where the r.v. Z has the characteristic function stated above.
6 Proofs of theorems
where for some .
Since the preceding quantities are ML estimators of their respective parameters, the invariance property of maximum likelihood together with (2.9) yields an ML estimator of the remaining parameter.
To complete the proof, we will show that as . If , then and .