# The consistency of estimator under fixed design regression model with NQD errors

## Abstract

In this article we investigate the fixed design nonparametric regression model ${Y}_{nk}=g\left({x}_{nk}\right)+{\epsilon }_{nk}$ for $1\le k\le n$, where the ${\epsilon }_{nk}$ are pairwise NQD random errors, the ${x}_{nk}$ are fixed design points, and $g\left(\cdot \right)$ is an unknown function. The nonparametric weighted estimator ${g}_{n}\left(\cdot \right)$ of $g\left(\cdot \right)$ is introduced and its consistency is studied. As a special case, a consistency result for weighted kernel estimators of the model is established. This extends earlier work on independent and dependent random errors to the NQD case.

## 1 Introduction

In regression analysis, it is common practice to investigate the functional relationship between the responses and the design points. The nonparametric regression model provides a useful explanatory and diagnostic tool for this purpose. One may see Müller and Hardle for many examples and good introductions to the general subject area.

To begin with, consider the fixed design nonparametric regression model

${Y}_{nk}=g\left({x}_{nk}\right)+{\epsilon }_{nk},\phantom{\rule{1em}{0ex}}1\le k\le n.$

Here ${x}_{nk}$, $1\le k\le n$, are known fixed design points, and ${\epsilon }_{nk}$ are random errors. $g\left(\cdot \right)$ is an unknown regression function. As an estimate of $g\left(\cdot \right)$, we consider the following general linear smoother:

${g}_{n}\left(x\right)=\sum _{k=1}^{n}{\omega }_{nk}\left(x\right){Y}_{nk},$

where the weight functions ${\omega }_{nk}\left(x\right)$ depend on $x,{x}_{n1},\dots ,{x}_{nn}$.
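In code, the estimator is just a weighted sum of the responses. Below is a minimal, self-contained sketch; the function names and the normalized Gaussian weight choice are our illustrations, not part of the model:

```python
import math

def linear_smoother(x, design, responses, weights_fn):
    # g_n(x) = sum_k w_nk(x) * Y_nk
    w = weights_fn(x, design)
    return sum(wk * yk for wk, yk in zip(w, responses))

def gaussian_weights(x, design, h=0.1):
    # One illustrative weight choice: normalized Gaussian kernel weights.
    raw = [math.exp(-0.5 * ((x - xk) / h) ** 2) for xk in design]
    s = sum(raw)
    return [r / s for r in raw]

# Toy fixed design on (0, 1] with noiseless responses from g(x) = x**2
n = 200
design = [k / n for k in range(1, n + 1)]
responses = [xk ** 2 for xk in design]
estimate = linear_smoother(0.5, design, responses, gaussian_weights)
```

With noiseless responses the smoother reproduces $g$ up to a small smoothing bias, which is the asymptotic-unbiasedness phenomenon studied below.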

It is well known that Georgiev first proposed the estimator above, and the estimator has subsequently been studied by many authors. A brief review of the theoretical development in recent years is worth mentioning. When the errors ${\epsilon }_{nk}$ are assumed to be independent, consistency and asymptotic normality have been investigated by Georgiev and Müller, among others. The case of dependent ${\epsilon }_{nk}$ has also been studied by various authors in recent years. Roussas et al. established asymptotic normality of ${g}_{n}\left(x\right)$ assuming that the errors come from a strictly stationary stochastic process satisfying the strong mixing condition. Tran et al. also discussed asymptotic normality of ${g}_{n}\left(x\right)$, assuming that the errors form a weakly stationary linear process with a martingale difference sequence. Hu et al. gave the mean consistency, complete consistency, and asymptotic normality of regression models based on linear process errors. For negatively associated sequences, Liang and Jing presented some asymptotic properties of estimates in nonparametric regression models, and Yang et al. generalized some of the results of Liang and Jing from negatively associated sequences to negatively orthant dependent sequences, and so on.

In this paper, we investigate the above nonparametric regression problem under pairwise NQD errors, which covers a more general sampling situation.

Definition 1.1 

The pair $\left(X,Y\right)$ of random variables X and Y is said to be NQD (negatively quadrant dependent), if

$P\left(X\le x,Y\le y\right)\le P\left(X\le x\right)P\left(Y\le y\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in {R}^{1}.$
(1.1)

A sequence of random variables $\left\{{X}_{n},n\ge 1\right\}$ is said to be pairwise NQD (NQD for short) if $\left({X}_{i},{X}_{j}\right)$ is NQD for every $i\ne j$, $i,j=1,2,\dots$ .

It can be deduced from Definition 1.1 that

$P\left(X>x,Y>y\right)\le P\left(X>x\right)P\left(Y>y\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in {R}^{1}.$
(1.2)

Moreover, it follows that (1.2) also implies (1.1), and hence, (1.1) and (1.2) are actually equivalent.
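As a concrete illustration of Definition 1.1, the antithetic pair $\left(U,1-U\right)$ with U uniform on $\left[0,1\right]$ is NQD, since $P\left(U\le x,1-U\le y\right)=max\left(0,x+y-1\right)\le xy$. The sketch below (helper names are ours) checks inequality (1.1) exactly on a grid:

```python
def joint_cdf(x, y):
    # P(U <= x, 1 - U <= y) = P(1 - y <= U <= x) for U ~ Uniform[0, 1]
    return max(0.0, min(x, 1.0) - max(1.0 - y, 0.0))

def marginal_cdf(t):
    # P(U <= t) for U ~ Uniform[0, 1]
    return min(max(t, 0.0), 1.0)

grid = [i / 20 for i in range(21)]
# Inequality (1.1): joint CDF dominated by the product of the marginals
nqd_holds = all(
    joint_cdf(x, y) <= marginal_cdf(x) * marginal_cdf(y) + 1e-12
    for x in grid
    for y in grid
)
```

The inequality $max\left(0,x+y-1\right)\le xy$ on $\left[0,1\right]^{2}$ is equivalent to $-\left(1-x\right)\left(1-y\right)\le 0$, so the grid check succeeds exactly, not just approximately.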

The definition was introduced by Lehmann; it includes independent random variables, NA (negatively associated) random variables, and NOD (negatively orthant dependent) random variables as special cases. Owing to the wide applications of NQD random variables in reliability theory and elsewhere, the notion has received much attention recently. Some properties of NQD random variables can be found in Lehmann, and there is much other relevant literature (e.g. Matula, Huang et al., Sung, Shi, Wang et al., Li and Yang).

Indeed, the pairwise NQD structure is more comprehensive than the NA (negatively associated) structure and the NOD (negatively orthant dependent) structure. However, some key technical tools, such as Bernstein-type and exponential inequalities, have not yet been established for NQD sequences, so the study of pairwise NQD random variables remains constrained, especially regarding estimators of the parametric and nonparametric components in regression models under an NQD error structure. Hence, extending the asymptotic properties of independent and other dependent random variables to the NQD case is highly desirable and of considerable significance in both theory and applications.

In this article, based on several related lemmas, we investigate the fixed design nonparametric regression model with NQD errors. The nonparametric estimator ${g}_{n}\left(\cdot \right)$ of $g\left(\cdot \right)$ will be introduced and the usual consistency properties of ${g}_{n}\left(\cdot \right)$, including mean convergence, uniform mean convergence, convergence in probability, etc., are studied under suitable regularity conditions.

The organization of this paper is as follows. In Section 2, we shall present several lemmas for proof of main results, and give the basic assumptions for the nonparametric estimator. We give the further assumption and the main results in Section 3. The proofs of the results will be deferred to Section 4.

## 2 Some lemmas and basic assumptions

### 2.1 Some lemmas

We begin with a few preliminary lemmas that are useful in the proofs of our main results. First, we cite a fact about the properties of NQD random variables, due to Lehmann.

Lemma 2.1 

Let the pair $\left(X,Y\right)$ of random variables X and Y be NQD, then:

1. (1)

$E\left(XY\right)\le EX\cdot EY$.

2. (2)

$P\left(X>x,Y>y\right)\le P\left(X>x\right)P\left(Y>y\right)$, for any $x,y\in {R}^{1}$.

3. (3)

If f, g are both non-decreasing (or both non-increasing) functions, then $f\left(X\right)$ and $g\left(Y\right)$ are NQD.

Lemma 2.2 

Let $\left\{{X}_{n},n\ge 1\right\}$ be a sequence of pairwise NQD random variables such that $E{X}_{n}=0$, $E{X}_{n}^{2}<\mathrm{\infty }$ for all $n\ge 1$, denote ${T}_{j}\left(k\right)\triangleq {\sum }_{i=j+1}^{j+k}{X}_{i}$, $j\ge 0$, $k\ge 1$, then

$\begin{array}{c}E{\left({T}_{j}\left(k\right)\right)}^{2}\le \sum _{i=j+1}^{j+k}E{X}_{i}^{2},\hfill \\ E\underset{1\le k\le n}{max}{\left({T}_{j}\left(k\right)\right)}^{2}\le \frac{4{log}^{2}n}{{log}^{2}2}\sum _{i=j+1}^{j+n}E{X}_{i}^{2}.\hfill \end{array}$
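The first inequality of Lemma 2.2 can be sanity-checked in closed form on a standard pairwise NQD family: multinomial cell counts are negatively associated, hence pairwise NQD, and their variances are explicit. A sketch under our own illustrative choices of trial count and cell probabilities:

```python
# Centered multinomial cell counts are negatively associated, hence
# pairwise NQD.  For n trials and cell probabilities p_i,
#   Var(sum_{i in S} N_i) = n * q * (1 - q),   q = sum_{i in S} p_i,
# while sum_{i in S} Var(N_i) = n * sum_i p_i * (1 - p_i),
# so Lemma 2.2's bound E(T)^2 <= sum E X_i^2 holds with room to spare.
n_trials = 100
probs = [0.1, 0.2, 0.3, 0.4]
subset = probs[:3]
q = sum(subset)
var_of_sum = n_trials * q * (1 - q)
sum_of_vars = n_trials * sum(p * (1 - p) for p in subset)
```

The slack between the two sides, $n\left({q}^{2}-\sum {p}_{i}^{2}\right)\ge 0$, is exactly the (nonpositive) covariance contribution that NQD removes.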

In what follows, we assume $0={x}_{n\left(0\right)}\le {x}_{n\left(1\right)}\le {x}_{n\left(2\right)}\le \cdots \le {x}_{n\left(n\right)}=1$ and let ${\delta }_{n}={max}_{1\le k\le n}\left({x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}\right)$. Furthermore, assume that

(A1) $K\left(\cdot \right)$ is bounded and satisfies a Lipschitz condition of order α ($\alpha >0$) on ${R}^{1}$, and ${\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt<\mathrm{\infty }$;

(A2) ${h}_{n}\to 0$ and ${\delta }_{n}^{\alpha }/{h}_{n}^{1+\alpha }\to 0$ as $n\to \mathrm{\infty }$;

(A3) ${\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}K\left(t\right)\phantom{\rule{0.2em}{0ex}}dt=1$.

Lemma 2.3 If conditions (A1), (A2) hold, then

$\underset{n\to \mathrm{\infty }}{lim}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|={\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt,\phantom{\rule{1em}{0ex}}x\in \left(0,1\right),$
(2.1)

and for a fixed point $\tau \in \left(0,1/2\right)$,

$\underset{n\to \mathrm{\infty }}{lim}\underset{x\in \left[\tau ,1-\tau \right]}{sup}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|={\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt.$
(2.2)

Proof of Lemma 2.3 Denote $H\left(x\right)=I\left(0\le x\le 1\right)$, where $I\left(\cdot \right)$ is the usual indicator function, and

$\begin{array}{c}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|-{\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt\hfill \\ \phantom{\rule{1em}{0ex}}=\left\{\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)H\left({x}_{n\left(k\right)}\right)|-{h}_{n}^{-1}{\int }_{0}^{1}|K\left(\frac{x-t}{{h}_{n}}\right)|H\left(t\right)\phantom{\rule{0.2em}{0ex}}dt\right\}\hfill \\ \phantom{\rule{2em}{0ex}}+\left\{{h}_{n}^{-1}{\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(\frac{x-t}{{h}_{n}}\right)|H\left(t\right)\phantom{\rule{0.2em}{0ex}}dt-{\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt\right\}\hfill \\ \phantom{\rule{1em}{0ex}}\triangleq {T}_{n1}\left(x\right)+{T}_{n2}\left(x\right).\hfill \end{array}$

We look at each term separately. Note that there is ${\theta }_{n\left(k\right)}\in \left(0,1\right)$ ($k=1,2,\dots ,n$) by the mean-value theorem for integrals such that

$\begin{array}{rcl}|{T}_{n1}\left(x\right)|& =& |{h}_{n}^{-1}\sum _{k=1}^{n}{\stackrel{˜}{\delta }}_{n\left(k\right)}\left\{|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|H\left({x}_{n\left(k\right)}\right)\\ -|K\left(\frac{x-{x}_{n\left(k\right)}+{\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n\left(k\right)}}{{h}_{n}}\right)|H\left({x}_{n\left(k\right)}-{\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n\left(k\right)}\right)\right\}|\\ \le & {h}_{n}^{-1}\sum _{k=1}^{n}{\stackrel{˜}{\delta }}_{n\left(k\right)}\left\{|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)-K\left(\frac{x-{x}_{n\left(k\right)}+{\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n,k}}{{h}_{n}}\right)||H\left({x}_{n\left(k\right)}\right)|\\ +|K\left(\frac{x-{x}_{n\left(k\right)}+{\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n\left(k\right)}}{{h}_{n}}\right)||H\left({x}_{n\left(k\right)}\right)-H\left({x}_{n\left(k\right)}-{\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n\left(k\right)}\right)|\right\}\\ \le & M{h}_{n}^{-1}\sum _{k=1}^{n}{\stackrel{˜}{\delta }}_{n\left(k\right)}{\left({\theta }_{n\left(k\right)}{\stackrel{˜}{\delta }}_{n\left(k\right)}/{h}_{n}\right)}^{\alpha }\\ \le & M{\left({\delta }_{n}/{h}_{n}\right)}^{\alpha }/{h}_{n},\end{array}$

where ${\stackrel{˜}{\delta }}_{n\left(k\right)}={x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}$, and condition (A1) was used in the second inequality.

Then, according to condition (A2), we conclude

$\underset{n\to \mathrm{\infty }}{lim}|{T}_{n1}\left(x\right)|=0\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\underset{x\in \left[\tau ,1-\tau \right]}{sup}|{T}_{n1}\left(x\right)|=0.$
(2.3)

As for ${T}_{n2}\left(x\right)$, when $x\in \left(0,1\right)$, we have

$|{T}_{n2}\left(x\right)|\le {\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)||H\left(x\right)-H\left(x-{h}_{n}t\right)|\phantom{\rule{0.2em}{0ex}}dt.$

Note that by the definition of $H\left(\cdot \right)$, ${lim}_{n\to \mathrm{\infty }}H\left(x-{h}_{n}t\right)=H\left(x\right)$ for all $x\in \left(0,1\right)$ and $t\in {R}^{1}$. Under the integrability of $|K\left(t\right)|$, $|{T}_{n2}\left(x\right)|\to 0$, as $n\to \mathrm{\infty }$ by the dominated convergence theorem, which together with (2.3) implies (2.1).

Again, since ${\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}|K\left(t\right)|\phantom{\rule{0.2em}{0ex}}dt<\mathrm{\infty }$ and ${h}_{n}\to 0$, for any $\epsilon >0$ one can choose a sufficiently small positive number ${\tau }_{0}<\tau$ such that, for n sufficiently large,

${\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}|K\left(t\right)|I\left(|t|\ge {\tau }_{0}/{h}_{n}\right)\phantom{\rule{0.2em}{0ex}}dt<\frac{\epsilon }{2}.$

As a result, for $x\in \left[\tau ,1-\tau \right]$, uniformly

$\begin{array}{rcl}|{T}_{n2}\left(x\right)|& \le & {\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)||H\left(x\right)-H\left(x-{h}_{n}t\right)|\left[I\left(|{h}_{n}t|<{\tau }_{0}\right)+I\left(|{h}_{n}t|\ge {\tau }_{0}\right)\right]\phantom{\rule{0.2em}{0ex}}dt\\ \le & 2{\int }_{-\mathrm{\infty }}^{+\mathrm{\infty }}|K\left(t\right)|I\left(|{h}_{n}t|\ge {\tau }_{0}\right)\phantom{\rule{0.2em}{0ex}}dt<\epsilon .\end{array}$

Therefore,

$\underset{n\to \mathrm{\infty }}{lim}\underset{x\in \left[\tau ,1-\tau \right]}{sup}|{T}_{n2}\left(x\right)|=0.$

Combining this with (2.3), (2.2) holds, as we wanted to show. This completes the proof. □

Lemma 2.4 If conditions (A1), (A2), (A3) hold, then

$\underset{n\to \mathrm{\infty }}{lim}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)=1,\phantom{\rule{1em}{0ex}}x\in \left(0,1\right),$
(2.4)

and for a fixed point $\tau \in \left(0,1/2\right)$,

$\underset{n\to \mathrm{\infty }}{lim}\underset{x\in \left[\tau ,1-\tau \right]}{sup}|\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)-1|=0.$
(2.5)

Proof of Lemma 2.4 The proof is similar to those of Lemma 2.3 with $|K\left(t\right)|$ replaced by $K\left(t\right)$ and using condition (A3), so is omitted here. □
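Lemma 2.4 is a Riemann-sum statement and is easy to check numerically. The sketch below uses our own illustrative choices: the standard normal kernel (which satisfies (A1) and (A3)), the equidistant design ${x}_{n\left(k\right)}=k/n$, and ${h}_{n}={n}^{-1/4}$, so that ${\delta }_{n}/{h}_{n}\to 0$ and (A2) holds with $\alpha =1$:

```python
import math

def kernel_sum(x, n, h):
    # Riemann-type sum in (2.4) for the standard normal kernel on the
    # equidistant design x_{n(k)} = k/n, so x_{n(k)} - x_{n(k-1)} = 1/n.
    total = 0.0
    for k in range(1, n + 1):
        xk = k / n
        total += (1.0 / n) / h * math.exp(-0.5 * ((x - xk) / h) ** 2) / math.sqrt(2 * math.pi)
    return total

# delta_n = 1/n, h_n = n**(-1/4): delta_n / h_n -> 0, so (A2) holds (alpha = 1)
vals = [kernel_sum(0.5, n, n ** -0.25) for n in (10, 100, 10000)]
```

As n grows the sum approaches $\int K\left(t\right)\phantom{\rule{0.2em}{0ex}}dt=1$, exactly as (2.4) asserts for interior points.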

### 2.2 Basic assumptions

Unless otherwise specified, we assume throughout the paper that the random sample $\left({x}_{nk},{Y}_{nk}\right)$ for $1\le k\le n$ come from the regression model

${Y}_{nk}=g\left({x}_{nk}\right)+{\epsilon }_{nk},\phantom{\rule{1em}{0ex}}1\le k\le n,$
(2.6)

where $\left\{{\epsilon }_{nk},1\le k\le n\right\}$ form a sequence of zero mean random errors with the same distribution as $\left\{{\epsilon }_{k},1\le k\le n\right\}$ for each n, $\left\{{x}_{nk},1\le k\le n\right\}$ are known fixed design points from a compact set A in ${R}^{d}$ (d a positive integer), and $g\left(\cdot \right)$ is an unknown real-valued regression function, assumed to be bounded on the compact set A.

The present paper investigates the general linear smoother as an estimate of $g\left(\cdot \right)$ in the following, defined by

${g}_{n}\left(x\right)=\sum _{k=1}^{n}{\omega }_{nk}\left(x\right){Y}_{nk},$

where the array of weight functions ${\omega }_{nk}\left(x\right)$, $1\le k\le n$, depends on the fixed design points $x,{x}_{n1},\dots ,{x}_{nn}$ and on the number of observations n, with the convention that ${\omega }_{nk}\left(x\right)=0$ for $k>n$.

In the following sections, we denote the set of all continuity points of the function $g\left(\cdot \right)$ on A by $C\left(g\right)$. The symbol $\parallel x\parallel$ denotes the Euclidean norm of x, and M denotes a generic positive constant, which may take different values at different places.

## 3 Main results

We shall establish two different modes of consistency for the nonparametric regression estimate ${g}_{n}\left(x\right)$ at a fixed point x. First, we give some assumptions on the weight functions ${\omega }_{nk}\left(x\right)$. Similar assumptions on the weight functions can be found in Georgiev, Hu et al., Liang and Jing, and Yang et al., among others. We have

(B1) ${\sum }_{k=1}^{n}{\omega }_{nk}\left(x\right)\to 1$, as $n\to \mathrm{\infty }$;

(B2) ${\sum }_{k=1}^{n}|{\omega }_{nk}\left(x\right)|\le M$ for all n;

(B3) ${\sum }_{k=1}^{n}{\omega }_{nk}^{2}\left(x\right)\to 0$, as $n\to \mathrm{\infty }$;

(B4) ${\sum }_{k=1}^{n}|{\omega }_{nk}\left(x\right)|I\left(\parallel {x}_{nk}-x\parallel >a\right)\to 0$, as $n\to \mathrm{\infty }$, for every $a>0$.

The assumptions on the weights ${\omega }_{nk}\left(x\right)$, $1\le k\le n$, are quite mild in practice and are easily satisfied by commonly adopted weights, such as the well-known nearest neighbor weights.

Example 3.1 Let $g\left(\cdot \right)$ be continuous on the interval $A\triangleq \left[0,1\right]$. Without loss of generality, put ${x}_{nk}=k/n$, $1\le k\le n$. When $|{x}_{ni}-x|=|{x}_{nj}-x|$, we rank $|{x}_{ni}-x|$ ahead of $|{x}_{nj}-x|$ if ${x}_{ni}<{x}_{nj}$; then a permutation of $|{x}_{n1}-x|,|{x}_{n2}-x|,\dots ,|{x}_{nn}-x|$ can be given as follows:

$|{x}_{{R}_{1}\left(x\right)}^{\left(n\right)}-x|\le |{x}_{{R}_{2}\left(x\right)}^{\left(n\right)}-x|\le \cdots \le |{x}_{{R}_{n}\left(x\right)}^{\left(n\right)}-x|,\phantom{\rule{1em}{0ex}}x\in A.$

Let ${k}_{n}=o\left(n\right)$ with ${k}_{n}\to \mathrm{\infty }$; we define the nearest neighbor weights as

${\omega }_{nk}\left(x\right)={k}_{n}^{-1}I\left(|{x}_{nk}-x|\le |{x}_{{R}_{{k}_{n}}\left(x\right)}^{\left(n\right)}-x|\right).$

Then one can easily verify by the choice of ${x}_{ni}$ and the definition of ${R}_{i}\left(x\right)$ that conditions (B1)-(B4) are satisfied.
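A short numerical check of Example 3.1 (function names are ours; ties are broken toward the smaller design point, as in the example, and exactly ${k}_{n}$ points receive weight):

```python
def nn_weights(x, design, k_n):
    # Nearest neighbor weights of Example 3.1: weight 1/k_n on the k_n
    # design points closest to x, ties broken toward the smaller point.
    order = sorted(range(len(design)), key=lambda i: (abs(design[i] - x), design[i]))
    w = [0.0] * len(design)
    for i in order[:k_n]:
        w[i] = 1.0 / k_n
    return w

n, k_n, x = 500, 25, 0.3
design = [k / n for k in range(1, n + 1)]
w = nn_weights(x, design, k_n)

b1 = abs(sum(w) - 1.0)                     # (B1): weights sum to 1 exactly
b2 = sum(abs(wk) for wk in w)              # (B2): bounded, here by M = 1
b3 = sum(wk ** 2 for wk in w)              # (B3): equals 1/k_n -> 0
b4 = sum(abs(wk) for wk, xk in zip(w, design) if abs(xk - x) > 0.1)  # (B4)
```

Here the ${k}_{n}$ selected points lie within distance roughly ${k}_{n}/\left(2n\right)$ of x, so the tail sum in (B4) vanishes once n is large relative to ${k}_{n}$.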

We now state our first result, on the mean convergence of ${g}_{n}\left(x\right)$; from a statistical point of view, the proof of Theorem 3.1 also shows that ${g}_{n}\left(x\right)$ is an asymptotically unbiased estimator of $g\left(x\right)$.

Theorem 3.1 (Mean convergence)

Assume that conditions (B1)-(B4) hold. Let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence with ${sup}_{n\ge 1}E{\epsilon }_{n}^{2}<\mathrm{\infty }$. If $0<p\le 2$, then

$\underset{n\to \mathrm{\infty }}{lim}E|{g}_{n}\left(x\right)-g\left(x\right){|}^{p}=0,$
(3.1)

for all $x\in C\left(g\right)$.

Another similar form of mean convergence, by using the inequality ${\left({\sum }_{k=1}^{n}|{a}_{k}{|}^{\beta }\right)}^{1/\beta }\le {\left({\sum }_{k=1}^{n}|{a}_{k}{|}^{\alpha }\right)}^{1/\alpha }$, $1\le \alpha \le \beta$, for any real number sequence $\left\{{a}_{k},1\le k\le n\right\}$, is the following.

Theorem 3.1′ (Mean convergence)

Assume that conditions (B1), (B2), and (B4) hold. Let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence with ${sup}_{n\ge 1}E|{\epsilon }_{n}{|}^{p}<\mathrm{\infty }$ for some $1<p\le 2$. If ${\sum }_{k=1}^{n}|{\omega }_{nk}\left(x\right){|}^{s}\to 0$, as $n\to \mathrm{\infty }$, with $1<s\le p$, then (3.1) holds for all $x\in C\left(g\right)$.

For any fixed point x on a compact set A in ${R}^{d}$ ($d\ge 1$), in order to obtain uniform convergence of the estimator of $g\left(x\right)$, the conditions on ${\omega }_{nk}\left(x\right)$ need to be replaced by the following uniform versions:

(${\mathrm{B}}_{1}^{\prime }$) ${sup}_{x\in A}|{\sum }_{k=1}^{n}{\omega }_{nk}\left(x\right)-1|\to 0$, as $n\to \mathrm{\infty }$;

(${\mathrm{B}}_{2}^{\prime }$) ${sup}_{x\in A}{\sum }_{k=1}^{n}|{\omega }_{nk}\left(x\right)|\le M$ for all n;

(${\mathrm{B}}_{3}^{\prime }$) ${sup}_{x\in A}{\sum }_{k=1}^{n}{\omega }_{nk}^{2}\left(x\right)\to 0$, as $n\to \mathrm{\infty }$;

(${\mathrm{B}}_{4}^{\prime }$) ${sup}_{x\in A}{\sum }_{k=1}^{n}|{\omega }_{nk}\left(x\right)|I\left(\parallel {x}_{nk}-x\parallel >a\right)\to 0$, as $n\to \mathrm{\infty }$, for $a>0$.

We are now in a position to give the following result.

Theorem 3.2 (Uniform mean convergence)

Assume that conditions (${\mathrm{B}}_{1}^{\prime }$)-(${\mathrm{B}}_{4}^{\prime }$) hold. Let $g\left(\cdot \right)$ be continuous on a compact set A, and let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence. If ${sup}_{n\ge 1}E{\epsilon }_{n}^{2}<\mathrm{\infty }$, then

$\underset{n\to \mathrm{\infty }}{lim}\underset{x\in A}{sup}E|{g}_{n}\left(x\right)-g\left(x\right){|}^{p}=0,$
(3.2)

for $0<p\le 2$.

Remark 3.1 Since NA sequences and NOD sequences are pairwise NQD sequences, we generalize some results of Liang and Jing and of Yang et al. to the case of NQD errors. As a consequence, one obtains a consistency property for the weighted kernel estimators in the model (2.6).

Corollary 3.1 Assume that conditions (A1), (A2), (A3) hold, and

$\left({\mathrm{A}}_{4}\right)\phantom{\rule{1em}{0ex}}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|I\left(|{x}_{n\left(k\right)}-x|>a\right)\to 0,$

as $n\to \mathrm{\infty }$, for $a>0$. Let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence with ${sup}_{n\ge 1}E{\epsilon }_{n}^{2}<\mathrm{\infty }$, and let $g\left(\cdot \right)$ be a continuous function on the interval $\left(0,1\right)$. If $0<p\le 2$, then

$\underset{n\to \mathrm{\infty }}{lim}E|\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right){Y}_{nk}-g\left(x\right){|}^{p}=0,$
(3.3)

for all $x\in \left(0,1\right)$.

Furthermore, put $\tau \in \left(0,1/2\right)$; if (A4) is replaced by

${\left({\mathrm{A}}_{4}\right)}^{\prime }\phantom{\rule{1em}{0ex}}\underset{x\in \left[\tau ,1-\tau \right]}{sup}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|I\left(|{x}_{n\left(k\right)}-x|>a\right)\to 0,$

as $n\to \mathrm{\infty }$, for $a>0$, then

$\underset{n\to \mathrm{\infty }}{lim}\underset{x\in \left[\tau ,1-\tau \right]}{sup}E|\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right){Y}_{nk}-g\left(x\right){|}^{p}=0,$
(3.4)

for $0<p\le 2$.
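Corollary 3.1 can be illustrated end to end. The sketch below uses our own choices: the standard normal kernel, bandwidth $h=0.05$, target $g\left(t\right)=sin\left(2\pi t\right)$, and an error sequence built from counter-monotone pairs, which is mean zero and pairwise NQD (within a pair one variable is a non-increasing function of the other; across pairs the variables are independent):

```python
import math
import random

def kernel_estimate(x, design, responses, h):
    # Kernel-weighted estimator of Corollary 3.1 with the standard normal
    # kernel; design must be sorted increasingly, with x_{n(0)} = 0.
    est, prev = 0.0, 0.0
    for xk, yk in zip(design, responses):
        k = math.exp(-0.5 * ((x - xk) / h) ** 2) / math.sqrt(2 * math.pi)
        est += (xk - prev) / h * k * yk
        prev = xk
    return est

random.seed(0)
n = 2000
design = [k / n for k in range(1, n + 1)]
g = lambda t: math.sin(2 * math.pi * t)

# pairwise NQD errors: counter-monotone within pairs, independent across pairs
errors = []
for _ in range(n // 2):
    v = random.uniform(-0.3, 0.3)
    errors += [v, -v]

responses = [g(xk) + e for xk, e in zip(design, errors)]
estimate = kernel_estimate(0.25, design, responses, h=0.05)
```

At the interior point $x=0.25$ the true value is $g\left(0.25\right)=1$, and the estimate recovers it up to smoothing bias and noise.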

Next, we give a weak consistency result for the estimator of $g\left(x\right)$ under a first absolute moment condition.

Theorem 3.3 (Convergence in probability)

Assume that conditions (B1), (B2), (B4) hold. Let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence, uniformly bounded by a random variable X in the sense that ${sup}_{n\ge 1}P\left(|{\epsilon }_{n}|\ge x\right)\le P\left(|X|\ge x\right)$ for all $x>0$. If $E|X|<\mathrm{\infty }$ and ${sup}_{k}|{\omega }_{nk}\left(x\right)|=o\left(1\right)$, then

${g}_{n}\left(x\right)\stackrel{P}{⟶}g\left(x\right),\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },$
(3.5)

for all $x\in C\left(g\right)$.

Corollary 3.2 Assume that conditions (A1)-(A4) hold, and let $\left\{{\epsilon }_{n},n\ge 1\right\}$ be a mean zero pairwise NQD sequence, uniformly bounded by a random variable X in the sense that ${sup}_{n\ge 1}P\left(|{\epsilon }_{n}|\ge x\right)\le P\left(|X|\ge x\right)$ for all $x>0$, and let $g\left(\cdot \right)$ be a continuous function on the interval $\left(0,1\right)$. If $E|X|<\mathrm{\infty }$ and ${\delta }_{n}{h}_{n}^{-1}=o\left(1\right)$, then

$\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right){Y}_{nk}\stackrel{P}{⟶}g\left(x\right),\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },$

for all $x\in \left(0,1\right)$.

## 4 Proofs for main results

Proof of Theorem 3.1 We first note, by the ${C}_{r}$-inequality, that

$E|{g}_{n}\left(x\right)-g\left(x\right){|}^{p}\le ME\left\{|{g}_{n}\left(x\right)-E{g}_{n}\left(x\right){|}^{p}+|E{g}_{n}\left(x\right)-g\left(x\right){|}^{p}\right\}.$
(4.1)

By Jensen’s inequality and Lemma 2.2, combined with condition (B3) and ${sup}_{n\ge 1}E{\epsilon }_{n}^{2}<\mathrm{\infty }$, and noting that $\left\{{\epsilon }_{nk},1\le k\le n\right\}$ has the same distribution as $\left\{{\epsilon }_{k},1\le k\le n\right\}$, we have for $0<p\le 2$

$E|{g}_{n}\left(x\right)-E{g}_{n}\left(x\right){|}^{p}=E|\sum _{k=1}^{n}{\omega }_{nk}\left(x\right){\epsilon }_{nk}{|}^{p}\le {\left(E{\left(\sum _{k=1}^{n}{\omega }_{nk}\left(x\right){\epsilon }_{nk}\right)}^{2}\right)}^{p/2}\le {\left(\sum _{k=1}^{n}{\omega }_{nk}^{2}\left(x\right)E{\epsilon }_{k}^{2}\right)}^{p/2}\le M{\left(\sum _{k=1}^{n}{\omega }_{nk}^{2}\left(x\right)\right)}^{p/2}\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },$
(4.2)

for $x\in C\left(g\right)$, since $\left\{{\omega }_{nk}\left(x\right){\epsilon }_{nk},1\le k\le n\right\}$ is also a pairwise NQD sequence according to Lemma 2.1.

Meanwhile, for the bias $E{g}_{n}\left(x\right)-g\left(x\right)$, choose a number $a>0$, and we get the following upper bound:

$\begin{array}{rcl}|E{g}_{n}\left(x\right)-g\left(x\right)|& \le & \sum _{k=1}^{n}|{\omega }_{nk}\left(x\right)|\cdot |g\left({x}_{nk}\right)-g\left(x\right)|\left[I\left(\parallel {x}_{nk}-x\parallel \le a\right)\\ +I\left(\parallel {x}_{nk}-x\parallel >a\right)\right]+|\sum _{k=1}^{n}{\omega }_{nk}\left(x\right)-1|\cdot |g\left(x\right)|,\phantom{\rule{1em}{0ex}}x\in C\left(g\right).\end{array}$

Because $x\in C\left(g\right)$, for any $\epsilon >0$ there exists a $\delta >0$ such that $|g\left({x}_{nk}\right)-g\left(x\right)|<\epsilon$ whenever $\parallel {x}_{nk}-x\parallel <\delta$. Thus, taking $0<a<\delta$, conditions (B1), (B2), (B4) together with the arbitrariness of $\epsilon >0$ imply that the estimate ${g}_{n}\left(\cdot \right)$ is asymptotically unbiased for $g\left(\cdot \right)$, and then

$|E{g}_{n}\left(x\right)-g\left(x\right){|}^{p}\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },x\in C\left(g\right).$
(4.3)

Therefore, we can deduce from (4.1), (4.2), (4.3) that (3.1) follows, and this ends the proof. □

Proof of Theorem 3.2 Note that on a compact set A, $g\left(\cdot \right)$ is uniformly continuous if it is continuous. Then, similar to the proof of Theorem 3.1, (3.2) can be obtained by the fact that

$\underset{x\in A}{sup}E|{g}_{n}\left(x\right)-g\left(x\right){|}^{p}\le M\left\{\underset{x\in A}{sup}E|{g}_{n}\left(x\right)-E{g}_{n}\left(x\right){|}^{p}+\underset{x\in A}{sup}|E{g}_{n}\left(x\right)-g\left(x\right){|}^{p}\right\},$

with the right-hand side tending to zero as $n\to \mathrm{\infty }$. This completes the proof. □

Proof of Corollary 3.1 Note that we have

$\begin{array}{c}\sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}|K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|\le M,\phantom{\rule{1em}{0ex}}\mathrm{\forall }n,\hfill \\ \sum _{k=1}^{n}\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)\to 1,\phantom{\rule{1em}{0ex}}x\in \left(0,1\right),\hfill \end{array}$

by Lemma 2.3 and Lemma 2.4, respectively.

Moreover, under condition (A2),

$\sum _{k=1}^{n}{\left(\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)\right)}^{2}\le M\frac{{\delta }_{n}}{{h}_{n}}\sum _{k=1}^{n}|\frac{{x}_{n\left(k\right)}-{x}_{n\left(k-1\right)}}{{h}_{n}}K\left(\frac{x-{x}_{n\left(k\right)}}{{h}_{n}}\right)|\to 0,$

as $n\to \mathrm{\infty }$.

Consequently, according to Theorem 3.1, (3.3) follows.

As for (3.4), one may similarly verify the conditions of Theorem 3.2 on the interval $\left[\tau ,1-\tau \right]$, using the second conclusions of Lemma 2.3 and Lemma 2.4, i.e. (2.2) and (2.5), respectively. This ends the proof. □

Proof of Theorem 3.3 Since $x\in C\left(g\right)$, by the same reasoning as before, for any positive number ε there is a number $\delta >0$ such that, choosing $a\in \left(0,\delta \right)$, conditions (B1), (B2), (B4) and the arbitrariness of $\epsilon >0$ imply that $|E{g}_{n}\left(x\right)-g\left(x\right)|$ tends to zero. For the proof of (3.5), note that

$|{g}_{n}\left(x\right)-g\left(x\right)|\le |{g}_{n}\left(x\right)-E{g}_{n}\left(x\right)|+|E{g}_{n}\left(x\right)-g\left(x\right)|.$
(4.4)

We now prove that the random part on the right-hand side of (4.4) tends to zero in probability as $n\to \mathrm{\infty }$. To this end, observe that

$|{g}_{n}\left(x\right)-E{g}_{n}\left(x\right)|=|\sum _{k=1}^{n}{\omega }_{nk}\left(x\right){\epsilon }_{nk}|.$

Next we introduce truncated variables:

$\begin{array}{c}{X}_{nk}^{\left(1\right)}\left(x\right)\triangleq -I\left({\omega }_{nk}\left(x\right){\epsilon }_{nk}\le -1\right)+{\omega }_{nk}\left(x\right){\epsilon }_{nk}I\left(|{\omega }_{nk}\left(x\right){\epsilon }_{nk}|<1\right)+I\left({\omega }_{nk}\left(x\right){\epsilon }_{nk}\ge 1\right),\hfill \\ {X}_{nk}\left(x\right)\triangleq {\omega }_{nk}\left(x\right){\epsilon }_{nk}I\left(|{\omega }_{nk}\left(x\right){\epsilon }_{nk}|<1\right)\hfill \end{array}$

and

${S}_{n}\left(x\right)\triangleq \sum _{k=1}^{n}{\omega }_{nk}\left(x\right){\epsilon }_{nk},\phantom{\rule{2em}{0ex}}{S}_{n}^{\left(1\right)}\left(x\right)\triangleq \sum _{k=1}^{n}{X}_{nk}^{\left(1\right)}\left(x\right).$
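The first truncation above is the map $t\mapsto max\left(-1,min\left(1,t\right)\right)$ applied to ${\omega }_{nk}\left(x\right){\epsilon }_{nk}$; since this map is non-decreasing, Lemma 2.1(3) guarantees that the truncated variables remain pairwise NQD. A tiny sketch of the map and a grid check of its monotonicity (names ours):

```python
def truncate_pm1(t):
    # X^{(1)}-type truncation: clip t = w_nk(x) * eps_nk to [-1, 1]
    return max(-1.0, min(1.0, t))

grid = [i / 10 for i in range(-30, 31)]  # -3.0, -2.9, ..., 3.0
values = [truncate_pm1(t) for t in grid]
is_monotone = all(a <= b for a, b in zip(values, values[1:]))
```

Monotonicity is the whole point here: an arbitrary bounded modification of the errors could destroy the NQD property, while a monotone one cannot.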

From $E|X|<\mathrm{\infty }$, it follows that

$tP\left(|X|\ge t\right)\le E|X|I\left(|X|\ge t\right)\to 0,\phantom{\rule{1em}{0ex}}t\to \mathrm{\infty }.$

Then for all $x\in C\left(g\right)$, when $n\to \mathrm{\infty }$,

$P\left({S}_{n}^{\left(1\right)}\left(x\right)\ne {S}_{n}\left(x\right)\right)\le \sum _{k=1}^{n}P\left(|X|\ge |{\omega }_{nk}\left(x\right){|}^{-1}\right)\to 0,$

by condition (B2) and ${sup}_{k}|{\omega }_{nk}\left(x\right)|=o\left(1\right)$.

It remains to show that ${S}_{n}^{\left(1\right)}\left(x\right)$ converges to zero in probability for all $x\in C\left(g\right)$. Observe that $\left\{{X}_{nk}^{\left(1\right)}\left(x\right),1\le k\le n\right\}$ is also a sequence of pairwise NQD random variables by Lemma 2.1; hence, by the Chebyshev inequality,

$\begin{array}{rcl}P\left(|{S}_{n}^{\left(1\right)}\left(x\right)|>\epsilon \right)& \le & {\epsilon }^{-2}E{\left({S}_{n}^{\left(1\right)}\left(x\right)\right)}^{2}\le {\epsilon }^{-2}\sum _{k=1}^{n}E{\left({X}_{nk}^{\left(1\right)}\left(x\right)\right)}^{2}\\ \le & {\epsilon }^{-2}\left[\sum _{k=1}^{n}P\left(|{\omega }_{nk}\left(x\right){\epsilon }_{nk}|\ge 1\right)+\sum _{k=1}^{n}{\omega }_{nk}^{2}\left(x\right)E{\epsilon }_{nk}^{2}I\left(|{\omega }_{nk}\left(x\right){\epsilon }_{nk}|<1\right)\right]\\ \triangleq & {\epsilon }^{-2}\left({I}_{n1}+{I}_{n2}\right),\end{array}$
(4.5)

where Lemma 2.2 is used to bound $E{\left({S}_{n}^{\left(1\right)}\left(x\right)\right)}^{2}$ by $\sum _{k=1}^{n}E{\left({X}_{nk}^{\left(1\right)}\left(x\right)\right)}^{2}$.

For ${I}_{n1}$, we have

${I}_{n1}\le \sum _{k=1}^{n}P\left(|{\omega }_{nk}\left(x\right)X|\ge 1\right)\le \sum _{k=1}^{n}|{\omega }_{nk}\left(x\right)|E|X|I\left(|{\omega }_{nk}\left(x\right)X|\ge 1\right)\le ME|X|I\left(|X|\ge {\left(\underset{k}{sup}|{\omega }_{nk}\left(x\right)|\right)}^{-1}\right)\to 0,$
(4.6)

as $n\to \mathrm{\infty }$, by condition (B2), $E|X|<\mathrm{\infty }$, and ${sup}_{k}|{\omega }_{nk}\left(x\right)|=o\left(1\right)$.

Now, since $E|X|<\mathrm{\infty }$ implies $tP\left(|X|\ge t\right)\to 0$ as $t\to \mathrm{\infty }$, choose a number s such that $P\left(|X|\ge t\right)\le \epsilon {t}^{-1}$ for $t\ge s$. We have

${u}^{-1}{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\le {u}^{-1}{\int }_{0}^{u}tP\left(|X|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\le {u}^{-1}M+\epsilon ,$

which means that ${u}^{-1}{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\to 0$ when $u\to \mathrm{\infty }$.
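The limit ${u}^{-1}{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\to 0$ can be seen in closed form for an integrable example. Below we take the dominating variable to be $Exp\left(1\right)$ (our illustrative choice), for which ${\int }_{0}^{u}t{e}^{-t}\phantom{\rule{0.2em}{0ex}}dt=1-\left(1+u\right){e}^{-u}$:

```python
import math

def tail_avg(u):
    # u^{-1} * integral_0^u t * P(X >= t) dt for X ~ Exp(1), where
    # P(X >= t) = exp(-t) and the integral is 1 - (1 + u) * exp(-u).
    return (1.0 - (1.0 + u) * math.exp(-u)) / u

vals = [tail_avg(u) for u in (1.0, 10.0, 1000.0)]
```

The averaged tail integral decreases toward zero as u grows, which is exactly the property used to kill ${I}_{n2}$ in the proof.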

Again, by the standard bound for truncated second moments,

$0\le E{\epsilon }_{k}^{2}I\left(|{\epsilon }_{k}|<u\right)\le 2{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt,\phantom{\rule{1em}{0ex}}u>0.$

Therefore, taking $u={|{\omega }_{nk}\left(x\right)|}^{-1}$ in this bound, it follows that

${I}_{n2}\le 2\sum _{k=1}^{n}{\omega }_{nk}^{2}\left(x\right){\int }_{0}^{{|{\omega }_{nk}\left(x\right)|}^{-1}}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt=2\sum _{k=1}^{n}|{\omega }_{nk}\left(x\right)|\cdot {u}^{-1}{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\to 0,\phantom{\rule{1em}{0ex}}n\to \mathrm{\infty },$
(4.7)

by condition (B2), since ${sup}_{k}|{\omega }_{nk}\left(x\right)|=o\left(1\right)$ implies that $u={|{\omega }_{nk}\left(x\right)|}^{-1}\to \mathrm{\infty }$ uniformly in k, so that ${u}^{-1}{\int }_{0}^{u}tP\left(|{\epsilon }_{k}|\ge t\right)\phantom{\rule{0.2em}{0ex}}dt\to 0$.

Hence, Theorem 3.3 follows from (4.4)-(4.7). This ends the proof. □

Proof of Corollary 3.2 By the discussion in Corollary 3.1, it is a direct result of Theorem 3.3. This completes the proof of Corollary 3.2. □

## References

1. Müller HG: Nonparametric Analysis of Longitudinal Data. Springer, Berlin; 1988.

2. Hardle W: Applied Nonparametric Regression. Cambridge University Press, New York; 1990.

3. Georgiev AA: Local properties of function fitting estimates with applications to system identification. In Mathematical Statistics and Applications. Proceedings of the 4th Pannonian Symposium on Mathematical Statistics, Bad Tatzmannsdorf, Austria, 4-10 September 1983 Reidel, Dordrecht; 1985:141-151.

4. Georgiev AA: Consistent nonparametric multiple regression: the fixed design case. J. Multivar. Anal. 1988,25(1):100-110. 10.1016/0047-259X(88)90155-8

5. Müller HG: Weak and universal consistency of moving weighted averages. Period. Math. Hung. 1987,18(3):241-250. 10.1007/BF01848087

6. Roussas GG, Tran LT, Ioannides DA: Fixed design regression for time series: asymptotic normality. J. Multivar. Anal. 1992,40(2):262-291. 10.1016/0047-259X(92)90026-C

7. Tran L, Roussas G, Yakowitz S, Van BT: Fixed-design regression for linear time series. Ann. Stat. 1996,24(3):975-991.

8. Hu SH, Pan GM, Gao QB: Estimation problems for a regression model with linear process errors. Appl. Math. J. Chin. Univ. Ser. A 2003,18(1):81-90.

9. Liang HY, Jing BY: Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. J. Multivar. Anal. 2005,95(2):227-245. 10.1016/j.jmva.2004.06.004

10. Yang WZ, Wang XJ, Wang XH, Hu SH: The consistency for estimator of nonparametric regression model based on NOD errors. J. Inequal. Appl. 2012., 2012: Article ID 140

11. Lehmann EL: Some concepts of dependence. Ann. Math. Stat. 1966, 37(5):1137-1153.

12. Matula P: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 1992,15(3):209-213. 10.1016/0167-7152(92)90191-7

13. Huang HW, Wang DC, Wu QY, Zhang QX: A note on the complete convergence for sequences of pairwise NQD random variables. J. Inequal. Appl. 2011., 2011: Article ID 92

14. Sung SH: Convergence in r -mean of weighted sums of NQD random variables. Appl. Math. Lett. 2013,26(1):18-24. 10.1016/j.aml.2011.12.030

15. Shi JH: On the strong law of large numbers for pairwise NQD random variables with different distributions. Acta Math. Appl. Sin. 2011,34(1):122-130.

16. Wang YB, Su C, Liu XG: On some limit properties for pairwise NQD sequences. Acta Math. Appl. Sin. 1998,21(3):404-414.

17. Li R, Yang WG: Strong convergence of pairwise NQD random sequences. J. Math. Anal. Appl. 2008,344(2):741-747. 10.1016/j.jmaa.2008.02.053

18. Wu QY: Convergence properties of pairwise NQD sequences. Acta Math. Sin. 2002,45(3):617-624.

## Acknowledgements

The work was partially supported by National Natural Science Foundation of China (NSFC) (No. 71271128, No. 11301473), the State Key Program of National Natural Science Foundation of China (71331006), Science Fund for Creative Research Groups, China (11021161), NCMIS, Graduate Innovation Foundation of Shanghai University of Finance and Economics, China (CXJJ2012-423, CXJJ2013-451), and Natural Science Foundation of Fujian Province, China (2012J01028). The authors wish to express their heartfelt thanks to two anonymous referees for their careful reading of the manuscript and helpful suggestions.

## Author information


### Corresponding author

Correspondence to Jianhua Shi.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.

## Rights and permissions


Shi, J., Chen, X. & Zhou, Y. The consistency of estimator under fixed design regression model with NQD errors. J Inequal Appl 2014, 92 (2014). https://doi.org/10.1186/1029-242X-2014-92 