To prove the main results, we first present several lemmas.
Lemma 4.1
Suppose that (A4) holds. We have:

(i)
\(E_{0}(t,s)\leq c_{k}/(1+ \vert t-s \vert )^{k}\) and \(E_{k}(t,s)\leq 2^{k}c_{k}/(1+2^{k} \vert t-s \vert )^{k}\), where k is a positive integer and \(c_{k}\) is a constant depending on k only.

(ii)
\(\sup_{0\leq t,s \leq 1}E_{m}(t,s)=O(2^{m})\).

(iii)
\(\sup_{0\leq t\leq 1}\int _{0}^{1}E_{m}(t,s)\,ds\leq c\), where c is a positive constant.

(iv)
\(\int _{0}^{1} E_{m}(t,s)\,ds\rightarrow 1\) uniformly in \(t\in [0,1]\), as \(m\rightarrow \infty \).
The proofs of (i) and (ii) can be found in [16], and (iii) follows from (i); the proof of (iv) can be found in [30].
Lemma 4.2
Suppose that (A4)–(A5) hold and \(h(\cdot )\) satisfies (A2)–(A3). Then
$$ \sup_{0\leq t\leq 1} \Biggl\vert h(t)-\sum _{i=1}^{n} h(t_{i}) \int _{A_{i}}E_{m}(t,s)\,ds \Biggr\vert =O \bigl(n^{-\gamma }\bigr)+O(\eta _{m}), $$
where
$$ \eta _{m}= \textstyle\begin{cases} (1/2^{m})^{\nu -1/2} & \textit{if } 1/2< \nu < 3/2, \\ \sqrt{m}/2^{m} & \textit{if } \nu =3/2, \\ 1/2^{m} & \textit{if } \nu >3/2. \end{cases} $$
This follows easily from Theorem 3.2 of [16].
Lemma 4.3
Let \(\{V_{i}, i=1,\ldots ,n\}\) be a sequence of independent random variables with mean zero and finite \((2+\delta )\)th moments, and let \(\{a_{ij}, i,j=1,\ldots ,n\}\) be a set of positive numbers such that \(\max_{i,j}a_{ij}\leq n^{-p_{1}}\) for some \(0\leq p_{1}\leq 1\) and \(\sum_{i=1}^{n}a_{ij}=O(n^{p_{2}})\) for some \(p_{2}\geq \max (0,2/(2+\delta )-p_{1})\). Then
$$ \max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}a_{ij}V_{i} \Biggr\vert =O\bigl(n^{-(p_{1}-p_{2})/2} \log n\bigr),\quad \textit{a.s.} $$
This lemma can be found in [31].
Lemma 4.4
Let \(\{\lambda _{n}(\theta ),\theta \in \varTheta \}\) be a sequence of random convex functions defined on a convex, open subset Θ of \(\mathbb{R}^{d}\). Suppose \(\lambda (\cdot )\) is a real-valued function on Θ such that \(\lambda _{n}(\theta )\rightarrow \lambda (\theta )\) in probability for each θ in Θ. Then, for each compact subset K of Θ, in probability,
$$ \sup_{\theta \in K} \bigl\vert \lambda _{n}(\theta )-\lambda (\theta ) \bigr\vert \rightarrow 0. $$
See [32].
Below, we give the proofs of the main results. The proof of Theorem 3.1 uses the idea of [32] and the convexity lemma (Lemma 4.4). To complete the proof of Theorem 3.2, it is enough to check a Lindeberg-type condition.
Proof of Theorem 3.1
(i) From (2.1), note that \(\hat{g}(t)=\hat{a}\) and â minimizes
$$ \sum_{i=1}^{n} \vert y_{i}-a \vert \int _{A_{i}}E_{m}(t,s)\,ds. $$
Let \(\theta =a-g(t)\) and \(\epsilon _{i}^{*}=\epsilon _{i}+[g(t_{i})-g(t)]\). Then \(\hat{\theta }=\hat{a}-g(t)\) minimizes the function
$$ G_{n}(\theta )=\sum_{i=1}^{n} \bigl\{ \bigl\vert \epsilon _{i}^{*}-\theta \bigr\vert - \bigl\vert \epsilon _{i}^{*} \bigr\vert \bigr\} \int _{A_{i}}E_{m}(t,s)\,ds. $$
The idea behind the proof, as in [32], is to approximate \(G_{n}(\theta )\) by a quadratic function whose minimizer has an explicit expression, and then to show that θ̂ is close enough to this minimizer to share its asymptotic behavior.
We now set out to approximate \(G_{n}(\theta )\) by a quadratic function of θ. Write
$$ G_{n}(\theta )=W_{n}\theta +R_{n}(\theta ) , $$
where \(W_{n}=-\sum_{i=1}^{n}\operatorname{sign}(\epsilon _{i})\int _{A_{i}}E_{m}(t,s)\,ds\), which does not depend on θ, and
$$ R_{n}(\theta )=\sum_{i=1}^{n} \bigl\{ \bigl\vert \epsilon _{i}^{*}-\theta \bigr\vert - \bigl\vert \epsilon _{i}^{*} \bigr\vert + \operatorname{sign}(\epsilon _{i})\theta \bigr\} \int _{A_{i}}E_{m}(t,s)\,ds. $$
(4.1)
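The usefulness of decomposition (4.1) rests on Knight's identity: for \(u\neq 0\) and any real θ,

$$ \vert u-\theta \vert - \vert u \vert =-\theta \operatorname{sign}(u)+2 \int _{0}^{\theta }\bigl[I(u\leq s)-I(u\leq 0)\bigr]\,ds, $$

in which the integral term is nonnegative and bounded by \(2 \vert \theta \vert I\{ \vert u \vert \leq \vert \theta \vert \}\). Applied with \(u=\epsilon _{i}^{*}\), it shows that the part of \(G_{n}(\theta )\) linear in θ is the dominant term, while the remainder is small; replacing \(\operatorname{sign}(\epsilon _{i}^{*})\) by \(\operatorname{sign}(\epsilon _{i})\) introduces an error that is likewise absorbed into \(R_{n}(\theta )\).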
We have
$$ G_{n}(\theta )=E\bigl(G_{n}(\theta ) \bigr)+W_{n}\theta +\bigl[R_{n}(\theta )E \bigl(R_{n}( \theta )\bigr)\bigr]. $$
(4.2)
Under the error assumption (A1), the function \(\Delta (t)=E[ \vert \epsilon _{i}-t \vert - \vert \epsilon _{i} \vert ]\) has a unique minimum at zero, and \(\Delta (t)=t^{2}f_{\epsilon }(0)+o(t^{2})\). Therefore, by Lemmas 4.1 and 4.2,
$$\begin{aligned} E\bigl(G_{n}(\theta )\bigr) =&\sum _{i=1}^{n}\bigl\{ f_{\epsilon }(0) \theta ^{2}-2f_{\epsilon }(0)\bigl[g(t_{i})-g(t) \bigr]\theta \bigr\} \int _{A_{i}}E_{m}(t,s)\,ds+o\bigl( \delta _{n}^{2}\bigr) \\ =&f_{\epsilon }(0)\theta ^{2}-2f_{\epsilon }(0)\theta \Biggl\{ \sum_{i=1}^{n}g(t_{i}) \int _{A_{i}}E_{m}(t,s)\,ds-g(t) \Biggr\} +o\bigl( \delta _{n}^{2}\bigr) \\ =&f_{\epsilon }(0)\theta ^{2}+O\bigl[\bigl(n^{-\gamma }+ \eta _{m}\bigr)\theta \bigr]+o\bigl( \delta _{n}^{2} \bigr), \end{aligned}$$
(4.3)
where \(\delta _{n}=\max \{ (n^{-\gamma }+\eta _{m}), \vert \theta \vert \} \). For (4.1), note that \(\vert \vert \epsilon _{i}^{*}-\theta \vert - \vert \epsilon _{i}^{*} \vert +\operatorname{sign}( \epsilon _{i})\theta \vert \leq 2 \vert \theta \vert I\{ \vert \varepsilon _{i} \vert \leq \vert \theta \vert + \vert g(t_{i})-g(t) \vert \}\); then we obtain
$$\begin{aligned} ER_{n}^{2}(\theta ) \leq & 4\theta ^{2}\sum _{i=1}^{n}EI\bigl\{ \vert \varepsilon _{i} \vert \leq \vert \theta \vert + \bigl\vert g(t_{i})-g(t) \bigr\vert \bigr\} \biggl\{ \int _{A_{i}}E_{m}(t,s)\,ds \biggr\} ^{2} \\ =&8\theta ^{2}f_{\epsilon }(0)\sum _{i=1}^{n} \bigl\vert g(t_{i})-g(t) \bigr\vert \biggl\{ \int _{A_{i}}E_{m}(t,s)\,ds \biggr\} ^{2} \bigl(1+o(1)\bigr) \\ =&O \biggl(\frac{2^{m}}{n^{1+\gamma }}\theta ^{2} \biggr). \end{aligned}$$
By Chebyshev's inequality, we get
$$ R_{n}(\theta )-E\bigl(R_{n}(\theta ) \bigr)=O_{p} \biggl( \vert \theta \vert \sqrt{ \frac{2^{m}}{n^{1+\gamma }}} \biggr). $$
(4.4)
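The expansion of \(\Delta (t)\) used in (4.3) can be verified directly. Writing \(F_{\epsilon }\) for the distribution function of \(\epsilon _{i}\), we have

$$ \Delta '(t)=E\bigl[-\operatorname{sign}(\epsilon _{i}-t)\bigr]=2F_{\epsilon }(t)-1, \qquad \Delta ''(t)=2f_{\epsilon }(t), $$

so \(\Delta '(0)=0\) because \(\epsilon _{i}\) has median zero, and a Taylor expansion at zero gives \(\Delta (t)=f_{\epsilon }(0)t^{2}+o(t^{2})\); in particular, zero is the unique minimum whenever \(f_{\epsilon }(0)>0\).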
Let \(a_{n}=O_{p} \{ n^{-\gamma }+\eta _{m}+\sqrt{ \frac{2^{m}}{n^{1+\gamma }}} \} \). Combining (4.2)–(4.4), for each fixed θ, we have
$$\begin{aligned} G_{n}(\theta ) =&f_{\epsilon }(0)\theta ^{2}+W_{n} \theta +O\bigl[\bigl(n^{-\gamma }+ \eta _{m}\bigr)\theta \bigr]+ O_{p} \biggl(\theta \sqrt{ \frac{2^{m}}{n^{1+\gamma }}} \biggr) \\ =&f_{\epsilon }(0)\theta ^{2}+(W_{n}+a_{n}) \theta , \end{aligned}$$
(4.5)
with \(a_{n}=o_{p}(1)\) uniformly. Note that
$$ W_{n}=-\sum_{i=1}^{n} \operatorname{sign}(\epsilon _{i}) \int _{A_{i}}E_{m}(t,s)\,ds. $$
It is easy to see that \(W_{n}\) has a bounded second moment and hence is stochastically bounded. Since the convex function \(G_{n}(\theta )-(W_{n}+a_{n})\theta \) converges in probability to the convex function \(f_{\epsilon }(0)\theta ^{2}\), it follows from the convexity lemma, Lemma 4.4, that, for every compact set K,
$$ \sup_{\theta \in K} \bigl\vert G_{n}( \theta )-(W_{n}+a_{n})\theta -f_{\epsilon }(0) \theta ^{2} \bigr\vert =o_{p}(1). $$
(4.6)
Thus, the quadratic approximation to the convex function \(G_{n}(\theta )\) holds uniformly for θ in any compact set. So, using the convexity assumption again, the minimizer θ̂ of \(G_{n}(\theta )\) converges in probability to the minimizer
$$ \bar{\theta }=-\frac{1}{2}f_{\epsilon }^{-1}(0) (W_{n}+a_{n}), $$
(4.7)
that is,
$$ P\bigl( \vert \hat{\theta }-\bar{\theta } \vert >\delta \bigr)\rightarrow 0. $$
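Indeed, (4.7) is just the minimizer of the quadratic approximation: completing the square gives

$$ f_{\epsilon }(0)\theta ^{2}+(W_{n}+a_{n})\theta =f_{\epsilon }(0) \biggl( \theta +\frac{W_{n}+a_{n}}{2f_{\epsilon }(0)} \biggr)^{2}- \frac{(W_{n}+a_{n})^{2}}{4f_{\epsilon }(0)}, $$

which is minimized uniquely at \(\theta =-(W_{n}+a_{n})/(2f_{\epsilon }(0))\), since \(f_{\epsilon }(0)>0\).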
The assertion can be proved by elementary arguments similar to the proof of Theorem 1 in [32]. Based on (4.6), write \(G_{n}(\theta )=(W_{n}+a_{n})\theta +f_{\epsilon }(0)\theta ^{2}+r_{n}( \theta )\), which can be rewritten as
$$ G_{n}(\theta )=f_{\epsilon }(0)\bigl\{ \vert \theta -\bar{\theta } \vert ^{2}- \vert \bar{\theta } \vert ^{2}\bigr\} +r_{n}(\theta ), $$
(4.8)
with \(\sup_{\theta \in K} \vert r_{n}(\theta ) \vert =o_{p}(1)\). Because θ̄ is stochastically bounded, the compact set K can be chosen so that, with probability arbitrarily close to one, it contains a closed ball \(B(n)\) with center θ̄ and radius δ, thereby implying that
$$ \Delta _{n}=\sup_{\theta \in B(n)} \bigl\vert r_{n}(\theta ) \bigr\vert =o_{p}(1). $$
Now consider the behavior of \(G_{n}(\theta )\) outside \(B(n)\). Suppose \(\theta =\bar{\theta }+\beta \mu \), with \(\beta >\delta \) and μ a unit vector. Define \(\theta ^{*}\) as the boundary point of \(B(n)\) that lies on the line segment from θ̄ to θ, i.e. \(\theta ^{*}=\bar{\theta }+\delta \mu \). Convexity of \(G_{n}(\theta )\), (4.8), and the definition of \(\Delta _{n}\) imply
$$\begin{aligned} \frac{\delta }{\beta }G_{n}(\theta )+\biggl(1- \frac{\delta }{\beta }\biggr)G_{n}( \bar{\theta }) \geq & G_{n}\bigl(\theta ^{*}\bigr) \\ \geq &f_{\epsilon }(0)\delta ^{2}-f_{\epsilon }(0) \vert \bar{\theta } \vert ^{2}- \Delta _{n} \\ \geq &f_{\epsilon }(0)\delta ^{2}+G_{n}(\bar{ \theta })-2\Delta _{n}. \end{aligned}$$
It follows that
$$ \inf_{ \vert \theta -\bar{\theta } \vert >\delta }G_{n}(\theta )\geq G_{n}( \bar{\theta })+\frac{\beta }{\delta }\bigl[f_{\epsilon }(0)\delta ^{2}-2\Delta _{n}\bigr]. $$
When \(2\Delta _{n}< f_{\epsilon }(0)\delta ^{2}\), which happens with probability tending to one, the minimum of \(G_{n}(\theta )\) cannot occur at any θ with \(\vert \theta -\bar{\theta } \vert >\delta \). This implies that, for any \(\delta >0\) and for large enough n, the minimum of \(G_{n}(\theta )\) must be achieved within \(B(n)\), i.e., \(\vert \hat{\theta }-\bar{\theta } \vert \leq \delta \) with probability tending to one. This completes the proof of (i).
(ii) In the following, we will prove that
$$ W_{n}=O \biggl\{ \biggl(\frac{2^{m}}{n} \biggr)^{1/2}\log n \biggr\} ,\quad \mbox{a.s.} $$
(4.9)
By Lemma 4.1, we have
$$ \max_{1\leq i\leq n} \biggl\vert \int _{A_{i}}E_{m}(t,s)\,ds \biggr\vert =O \bigl(2^{m}/n\bigr)=O\bigl(n^{-2p}\bigr) $$
and
$$ \sum_{i=1}^{n} \int _{A_{i}}E_{m}(t,s)\,ds= \int _{0}^{1} E_{m}(t,s)\,ds=O(1)=O \bigl(n^{p_{2}}\bigr), $$
where \(p_{1}=2p\) with \(0\leq p_{1}\leq 1\) and \(p_{1}\geq 2/(2+\delta )\), and \(p_{2}=0\), which can be satisfied by Conditions (A1) and (A7)(ii). By Lemma 4.3, \(W_{n}=O(n^{-p}\log n)\) a.s.; since \((2^{m}/n)^{1/2}=n^{-p}\), this gives (4.9). So, (ii) holds. □
Proof of Theorem 3.2
From Theorem 3.1(i), we have
$$ 2f_{\epsilon }(0)\bigl\{ \hat{g}(t)-g(t)\bigr\} =Z_{n}(t)+R_{n}(m;\gamma ,\nu ), $$
(4.10)
where \(Z_{n}(t)=\sum_{i=1}^{n}\operatorname{sign}(\epsilon _{i})\int _{A_{i}}E_{m}(t,s)\,ds\) and \(R_{n}(m;\gamma ,\nu )=O_{p} \{ n^{-\gamma }+\eta _{m}+\sqrt{ \frac{2^{m}}{n^{1+\gamma }}} \} \). From (A7)(iii), \(n2^{-2m\nu ^{*}}\rightarrow 0\), one gets
$$ \sqrt{n2^{-m}}R_{n}(m;\gamma ,\nu )=o_{p}(1). $$
(4.11)
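Note that, under (A1), the \(\operatorname{sign}(\epsilon _{i})\) are independent with \(E\operatorname{sign}(\epsilon _{i})=0\) and \(\operatorname{sign}^{2}(\epsilon _{i})=1\), so, with the normalization \(\sqrt{n2^{-m}}\),

$$ \operatorname{var} \bigl(\sqrt{n2^{-m}}Z_{n}\bigl(t^{(m)}\bigr) \bigr)=n2^{-m} \sum_{i=1}^{n} \biggl( \int _{A_{i}}E_{m}\bigl(t^{(m)},s\bigr)\,ds \biggr)^{2}, $$

which is the quantity compared with \(\kappa (t)\omega _{0}^{2}\) below.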
Now, let us verify the asymptotic normality of \(\sqrt{n2^{-m}}Z_{n}(t^{(m)})\). First, we calculate its variance. By the proofs of Theorem 3.3 and Lemma 6.1 of [16], we have
$$\begin{aligned}& \bigl\vert \operatorname{var} \bigl(\sqrt{n2^{-m}}Z_{n} \bigl(t^{(m)}\bigr) \bigr)-\kappa (t) \omega _{0}^{2} \bigr\vert \\& \quad = \Biggl\vert n2^{-m}\sum _{i=1}^{n} \biggl( \int _{A_{i}}E_{m}\bigl(t^{(m)},s\bigr)\,ds \biggr)^{2}-\kappa (t)\omega _{0}^{2} \Biggr\vert \\& \quad \leq \Biggl\vert n2^{-m}\sum_{i=1}^{n} \biggl( \int _{A_{i}}E_{m}\bigl(t^{(m)},s\bigr)\,ds \biggr)^{2}-2^{-m} \int _{0}^{1}E_{m}^{2} \bigl(t^{(m)},s\bigr)\kappa (s)\,ds \Biggr\vert \\& \qquad {} + \biggl\vert 2^{-m} \int _{0}^{1}E_{m}^{2} \bigl(t^{(m)},s\bigr)\kappa (s)\,ds-\kappa (t) \omega _{0}^{2} \biggr\vert \\& \quad \leq n2^{-m} \Biggl\vert \sum_{i=1}^{n} \biggl[ (s_{i}-s_{i-1})^{2}E_{m}^{2} \bigl(t^{(m)},u_{i}\bigr)- \frac{1}{n}(s_{i}-s_{i-1})E_{m}^{2} \bigl(t^{(m)},v_{i}\bigr)\kappa (v_{i}) \biggr] \Biggr\vert +o(1) \\& \qquad (\mbox{where }u_{i}\mbox{ and }v_{i}\mbox{ belong to }A_{i}) \\& \quad = n2^{-m}O\bigl(n^{-1}\bigr)O\bigl(n2^{-m} \bigr) \biggl(\rho (n)2^{2m}+ \frac{2^{2m}}{n^{2}}+ \frac{2^{2m}}{n}\frac{2^{m}}{n} \biggr)+o(1) \\& \quad \leq O \bigl(n\rho (n)+2^{m}/n \bigr)=o(1). \end{aligned}$$
So,
$$ \operatorname{var} \bigl(\sqrt{n2^{-m}}Z_{n} \bigl(t^{(m)}\bigr) \bigr)=\kappa (t)\omega _{0}^{2}+o(1). $$
(4.12)
To complete the proof, we only need to check the Lindeberg-type condition
$$ \max_{1\leq i\leq n} \frac{n2^{-m} (\int _{A_{i}}E_{m}(t,s)\,ds )^{2}}{\operatorname{var} (\sqrt{n2^{-m}}Z_{n}(t^{(m)}) )} \rightarrow 0. $$
From (4.12) and Lemma 4.1, one sees that this order is \(O(2^{m}/n)\rightarrow 0\). Thus, the proof of Theorem 3.2 is complete. □