Error analysis for \(l^{q}\)-coefficient regularized moving least-square regression
Journal of Inequalities and Applications volume 2018, Article number: 262 (2018)
Abstract
We consider the moving least-square (MLS) method by the coefficient-based regression framework with \(l^{q}\)-regularizer \((1\leq q\leq2)\) and the sample dependent hypothesis spaces. The data dependent characteristic of the new algorithm provides flexibility and adaptivity for MLS. We carry out a rigorous error analysis by using the stepping stone technique in the error decomposition. The concentration technique with the \(l^{2}\)-empirical covering number is also employed in our study to improve the sample error. We derive the satisfactory learning rate that can be arbitrarily close to the best rate \(O(m^{-1})\) under more natural and much simpler conditions.
1 Introduction
The least-square (LS) method is an important global approximation method based on regular or densely clustered data sample points. However, many practical applications in fields such as engineering and machine learning [1–4] produce irregular or scattered samples, and these also need to be analyzed. For example, in geographical contour drawing it is important to derive a set of contours when the height is available only at scattered data sample points. It is therefore vital to have a suitable local approximation method for scattered data. The moving least-square (MLS) method was introduced by McLain [4] to draw a set of contours from a cluster of scattered data sample points. The central idea of the MLS method consists of two steps: first, one takes an arbitrary fixed point and forms a local approximation there; second, since the fixed point is arbitrary, one lets it move over the whole domain. The MLS method has proved to be a useful local approximation tool in various fields of mathematics such as approximation theory, data smoothing [5], statistics [6] and numerical analysis [7]. In computer graphics, it is used to reconstruct a surface from a set of points, typically to create a 3D surface from a point cloud. Recently, a research effort has been made to study regression learning algorithms based on the MLS method; see [8–12]. It has an advantage over classical learning algorithms in that its hypothesis space can be very simple, such as a space of linear functions or polynomials.
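The two-step idea above (a local weighted fit at a fixed point, which is then moved over the domain) can be sketched numerically. The following minimal one-dimensional MLS estimator is illustrative only: the Gaussian weight, the linear local basis and all parameter values are our own choices, not those of the paper.

```python
import numpy as np

def mls_estimate(x_eval, xs, ys, sigma=0.05, degree=1):
    """Step 1: at the fixed point x_eval, fit a local polynomial by
    weighted least squares, weighting samples near x_eval more heavily.
    Step 2 is realized by calling this function at every point of
    interest, i.e., letting x_eval move over the whole domain."""
    w = np.exp(-((xs - x_eval) / sigma) ** 2)   # localization weights
    A = np.vander(xs, degree + 1)               # local polynomial basis
    W = np.diag(w)
    # Weighted normal equations (A^T W A) c = A^T W y.
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)
    return np.polyval(coef, x_eval)

# Scattered, noisy samples of a smooth target function.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 80)
ys = np.sin(2 * np.pi * xs) + 0.05 * rng.normal(size=80)
grid = np.linspace(0.1, 0.9, 9)
est = np.array([mls_estimate(t, xs, ys) for t in grid])
```

Because the fit is recomputed at every evaluation point, the estimator adapts to scattered samples without any global basis.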
We briefly recall the regression learning problem for the MLS method. Functions for learning are defined on a compact subset X (input space) of \(\mathbb{R}^{n}\) and take values in \(Y=\mathbb{R}\) (output space). The sampling process is governed by an unknown Borel probability measure ρ on \(Z= X\times Y\). The regression function is given by
\(f_{\rho}(x)=\int_{Y}y\,d\rho(y|x), \quad x\in X,\)
where \(\rho(\cdot|x)\) is the conditional probability measure induced by ρ on Y given \(x\in X\). The goal of regression learning is to find a good approximation of the regression function \(f_{\rho}\) from a set of random samples \(\mathbf{z}=\{z_{i}\}_{i=1}^{m}=\{(x_{i}, y_{i})\}_{i=1}^{m} \in Z^{m}\) drawn independently and identically according to ρ.
In [11], Tong and Wu considered the following regularized MLS regression algorithm. The hypothesis space is a reproducing kernel Hilbert space (RKHS) \(\mathcal{H}_{K}\) induced by a Mercer kernel K, which is a continuous, symmetric, and positive semi-definite function on \(X\times X\). The RKHS \(\mathcal {H}_{K}\) is the completion of the linear span of the set of functions \(\{K_{x} :=K(x,\cdot) : x \in X \}\) with respect to the inner product \(\langle\sum_{i=1}^{n} \alpha_{i} K_{x_{i}}, \sum_{j=1}^{m} \beta_{j} K_{y_{j}} \rangle_{K} := \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_{i} \beta_{j} K(x_{i} , y_{j})\). The reproducing property in \(\mathcal{H}_{K}\) is
\(f(x)=\langle f, K_{x}\rangle_{K}, \quad \forall f\in\mathcal{H}_{K},\ x\in X. \quad (1.1)\)
Denote by \(C(X)\) the space of continuous functions on X with the norm \(\|\cdot\|_{\infty}\). Since K is continuous on \(X\times X\), \(\mathcal {H}_{K}\subseteq C(X)\). Let \(\kappa:= \sup_{t, x\in X}|K(x,t)|<\infty \). Then, by (1.1), we have
\(\|f\|_{\infty}\leq\sqrt{\kappa}\,\|f\|_{K}, \quad \forall f\in\mathcal{H}_{K}.\)
We define the approximation \(f_{\mathbf{z},\lambda}\) of \(f_{\rho}\) pointwise by
\(f_{\mathbf{z},\lambda}(x):=f_{\mathbf{z},\sigma,\lambda,x}(x), \quad (1.3)\)
where
\(f_{\mathbf{z},\sigma,\lambda,x}=\arg\min_{f\in\mathcal{H}_{K}} \{\frac{1}{m}\sum_{i=1}^{m}\Phi (\frac{x}{\sigma},\frac{x_{i}}{\sigma} ) (f(x_{i})-y_{i} )^{2}+\lambda\|f\|_{K}^{2} \}, \quad (1.4)\)
where \(\lambda=\lambda(m)>0\) is a regularization parameter, \(\sigma =\sigma(m)>0\) is a window width, and \(\Phi:\mathbb{R}^{n}\times \mathbb{R}^{n}\to\mathbb{R}^{+}\) is called an MLS weight function which satisfies the following conditions:
where the constants \(q>n+1\), \(c_{q}\), \(c_{\Phi}>0\).
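For intuition, here is a hypothetical weight of the inverse-polynomial type, which is strictly positive and exhibits the polynomial decay behavior such conditions describe; the exponent and constants below are illustrative, not the paper's.

```python
import numpy as np

def phi(x, u, q=4.0, c_phi=1.0):
    """Illustrative MLS weight: strictly positive everywhere and
    decaying like (1 + |x - u|)^(-q) as u moves away from x."""
    dist = np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(u))
    return c_phi * (1.0 + dist) ** (-q)

# Largest on the diagonal (u = x), polynomially small far away.
vals = [phi(0.0, u) for u in (0.0, 0.5, 1.0, 5.0)]
```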
The scheme (1.3)–(1.4) shows that regularization not only ensures computational stability but also preserves the localization property of the algorithm. In this paper, we study a new regularized version of the MLS regression algorithm, adopting coefficient-based \(l^{q}\)-regularization and a data dependent hypothesis space. We define \(f_{\mathbf{z},\eta}(x):=f_{\mathbf{z},\sigma,\eta,x}(x)\), where
\(f_{\mathbf{z},\sigma,\eta,x}=\arg\min_{f=\sum_{i=1}^{m}a_{i}K_{x_{i}}\in\mathcal{H}_{K, \mathbf{z}}} \{\frac{1}{m}\sum_{i=1}^{m}\Phi (\frac{x}{\sigma},\frac{x_{i}}{\sigma} ) (f(x_{i})-y_{i} )^{2}+\eta\sum_{i=1}^{m}|a_{i}|^{q} \}, \quad (1.9)\)
and the sample dependent hypothesis space is
\(\mathcal{H}_{K, \mathbf{z}}= \{\sum_{i=1}^{m}a_{i}K_{x_{i}}: a_{i}\in\mathbb{R}, i=1,\ldots,m \}.\)
The data dependent nature of the kernel-based hypothesis space provides flexibility for the learning algorithm, for instance in choosing the \(l^{q}\)-norm regularizer of a function expansion involving the samples. Compared with the scheme (1.3)–(1.4) in a reproducing kernel Hilbert space, the first advantage of the algorithm (1.9) is the efficiency of its computations, since the minimization is over finitely many coefficients rather than a function space. Another advantage is that we can choose the parameter q according to the research interest, such as smoothness or sparsity. To study the approximation quality of \(f_{\mathbf{z},\eta}\), we derive an upper bound for the error \(\| f_{\mathbf{z},\eta}-f_{\rho}\|_{\rho_{X}}\), where \(\|f\|_{\rho_{X}}:=(\int_{X}|f(x)|^{2}\,d{\rho _{X}})^{\frac{1}{2}}\), and its convergence rate as \(m \to\infty\); see [8–11, 13, 14]. The remainder of this paper is organized as follows. In Sect. 2, we provide the main result. The error decomposition analysis and the upper bounds for the hypothesis error, the approximation error and the sample error are given in Sect. 3. In Sect. 4, we prove the main result. Finally, Sect. 5 concludes the paper with future research lines.
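To make the coefficient-based scheme concrete, the sketch below solves the problem in the special case q = 2, where the minimizer has a closed form via normal equations. The Gaussian kernel, the Gaussian localization weight and all parameter values are our own illustrative assumptions; for 1 ≤ q < 2 an iterative solver would be needed instead.

```python
import numpy as np

def coef_mls_q2(x_eval, xs, ys, kw=0.05, sigma=0.1, eta=1e-3):
    """Coefficient-based regularized MLS at a moving point x_eval, q = 2:
    minimize (1/m) sum_i w_i (f(x_i) - y_i)^2 + eta * sum_i a_i^2
    over f = sum_j a_j K(x_j, .). Setting the gradient to zero gives
    the linear system (K W K / m + eta I) a = K W y / m."""
    m = len(xs)
    K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * kw ** 2))  # Gram matrix
    w = np.exp(-((xs - x_eval) / sigma) ** 2)                      # MLS weights
    W = np.diag(w)
    a = np.linalg.solve(K @ W @ K / m + eta * np.eye(m), K @ W @ ys / m)
    # Evaluate the coefficient expansion at the moving point.
    return np.exp(-(xs - x_eval) ** 2 / (2 * kw ** 2)) @ a

rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, 80)
ys = np.sin(2 * np.pi * xs) + 0.05 * rng.normal(size=80)
pred = coef_mls_q2(0.25, xs, ys)   # target value is sin(pi/2) = 1
```

Only the coefficients near the moving point are effectively constrained by the weighted data term, which is what makes the scheme local.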
2 Main result
We first formulate some basic notation and assumptions.
Let \(\rho_{X}\) be the marginal distribution of ρ on X and \(L_{\rho_{X}} ^{2}(X)\) be the Hilbert space of functions from X to Y square-integrable with respect to \(\rho_{X}\), with the norm denoted by \(\|\cdot\|_{\rho _{X}}\). The integral operator \(L_{K}:L_{\rho_{X}} ^{2}(X)\rightarrow L_{\rho _{X}} ^{2}(X)\) is defined by
\(L_{K}f(x)=\int_{X}K(x,u)f(u)\,d\rho_{X}(u), \quad x\in X.\)
Since X is compact and K is continuous, \(L_{K}\) is a compact operator. Its fractional power operator \(L_{K}^{r}:L_{\rho_{X}} ^{2}(X)\rightarrow L_{\rho_{X}} ^{2}(X), r>0\), is defined by
\(L_{K}^{r}f=\sum_{i}\mu_{i}^{r}\langle f, e_{i}\rangle_{\rho_{X}}e_{i},\)
where \(\{\mu_{i}\} \) are the eigenvalues of the operator \(L_{K}\) and \(\{e_{i}\}\) are the corresponding eigenfunctions which form an orthonormal basis of \(L_{\rho_{X}} ^{2}(X)\); see [15]. For \(r>0\), the function \(f_{\rho}\) is said to satisfy the regularity condition of order r provided that \(L_{K}^{-r}f_{\rho}\in L^{2}_{\rho_{X}}\).
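The spectrum \(\{\mu_{i}\}\) can be probed numerically: for samples drawn from \(\rho_{X}\), the eigenvalues of the scaled Gram matrix \(K[\mathbf{x}]/m\) approximate the leading eigenvalues of \(L_{K}\) as m grows. The Gaussian kernel and the uniform marginal below are illustrative assumptions.

```python
import numpy as np

# Eigenvalues of the scaled Gram matrix K[x]/m approximate the leading
# eigenvalues mu_i of the integral operator L_K as m grows.
rng = np.random.default_rng(2)
m = 400
xs = rng.uniform(0.0, 1.0, m)
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * 0.2 ** 2))
mu = np.sort(np.linalg.eigvalsh(K / m))[::-1]   # descending order
```

The rapid decay of `mu` reflects the smoothness of the kernel, which is what makes a regularity condition such as \(L_{K}^{-r}f_{\rho}\in L^{2}_{\rho_{X}}\) meaningful.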
The capacity of \(\mathcal {H}_{K, \mathbf{z}}\) is measured by the \(l^{2}\)-empirical covering number \(\mathcal{N}_{2}(B_{1},\epsilon)\); see [16]. We assume the capacity condition
\(\log\mathcal{N}_{2}(B_{1},\epsilon)\leq c_{p}\epsilon^{-p}, \quad \forall\epsilon>0, \quad (2.1)\)
where \(B_{1}= \{f\in\mathcal{H}_{K, \mathbf{z}}: \|f\|_{K}\leq 1 \}\), the exponent \(0< p<2\) and the constant \(c_{p}>0\).
Definition 2.1
The probability measure \(\rho_{X} \) on X is said to satisfy the condition \(L_{\tau}\) with exponent \(\tau>0\) if
\(\rho_{X} (B(x,r) )\geq c_{\tau}r^{\tau}, \quad \forall x\in X,\ 0< r\leq r_{0}, \quad (2.2)\)
where the constants \(r_{0}>0\), \(c_{\tau}>0\) and \(B(x,r)=\{u\in X: |u-x|\leq r\}\) for \(r>0\).
We use the projection operator to obtain a faster learning rate under the condition that \(|y|\leq M\) almost surely, where \(M\geq1\); see [17–19].
Definition 2.2
Fix \(M>0\). The projection operator \(\pi_{M}\) on the space of measurable functions \(f:X\rightarrow\mathbb{R}\) is defined as
\(\pi_{M}(f)(x)= \begin{cases} M, & \text{if } f(x)>M, \\ f(x), & \text{if } |f(x)|\leq M, \\ -M, & \text{if } f(x)<-M. \end{cases}\)
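In code, the projection operator is simply value clipping; a minimal sketch:

```python
import numpy as np

def project(values, M=1.0):
    """pi_M(f): replace f(x) by M when f(x) > M, by -M when
    f(x) < -M, and keep f(x) unchanged when |f(x)| <= M."""
    return np.clip(values, -M, M)

clipped = project(np.array([-3.0, -0.5, 0.0, 2.5]))
```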
We assume all the constants are positive and independent of δ, m, λ, η or σ. Now we are in a position to give the learning rates of the algorithm (1.9).
Theorem 2.1
Suppose \(L_{K}^{-r}f_{\rho}\in L^{2}_{\rho_{X}}\) with \(r>0\), (2.1) with \(0< p<2\) and (2.2) hold. If all the functions \(f\in\mathcal{H}_{K}\cup\{{f_{\rho}}\}\) satisfy the Lipschitz condition on X, that is, for the constant \(c_{0}>0\),
then, for any \(0<\delta<1\), with confidence \(1-\delta\), we have
where
Remark 2.1
When \(p\rightarrow0\) and \(r\geq\frac{1}{2}\), our convergence rate \(m^{-\frac{2q}{(2p+2q+3pq)(1+\tau)}}\) tends to \(m^{-\frac{1}{1+\tau}}\), which is the rate derived in [11]. In particular, if the unnatural norm condition of [8] holds, we can obtain the faster rate \(m^{\tau\varepsilon-\frac{2q}{2p+2q+3pq}}\) for \(r\geq\frac {1}{2}\), which can be arbitrarily close to \(O(m^{-1})\) as \(\varepsilon \rightarrow0\) and \(p\rightarrow0\).
3 Error analysis
We present only the statements of the main propositions in this section; all proofs are given in the appendices. To estimate \(\|\pi_{M}(f_{\mathbf{z},\eta})-f_{\rho}\|^{2}_{\rho_{X}}\), we invoke the following proposition, whose proof is analogous to that of Theorem 3.3 in [11].
Proposition 3.1
If \(\rho_{X}\) satisfies (2.1), and all the functions \(f\in \mathcal{H}_{K}\cup\{{f_{\rho}}\}\) satisfy (2.4), then
where
is called the local moving expected risk.
It remains to bound the integral in (3.1). To this end, we decompose it by means of \(f_{\mathbf{z},\lambda}\), which plays a stepping stone role between \(f_{\mathbf{z},\eta}\) and the regularization function \(f_{\lambda}\), the two being defined with different regularization parameters η and λ. Here \(f_{\lambda}\) is given by
Proposition 3.2
Let \(f_{\mathbf{z},\sigma,\eta,x}\) be defined as in (1.9) and
be the local moving empirical risk. Then
where
\(\mathcal{S}(\mathbf{z},\lambda,\eta)\) is known as the sample error. \(\mathcal{H}(\mathbf{z},\lambda,\eta)\) is called the hypothesis error. \(\mathcal{D}(\lambda)\) is called the approximation error.
The estimation of the hypothesis error can be conducted analogously to that in [18].
Proposition 3.3
Under the assumptions of Theorem 2.1, we have
For the approximation error, we directly invoke the following result in [20].
Proposition 3.4
Under the assumption \(L_{K}^{-r}f_{\rho}\in L^{2}_{\rho_{X}}\) with \(r>0\), we have
For the sample error, we decompose it into two parts:
We first give the upper bound of \(\mathcal{S}_{2}(\mathbf {z},\lambda)\) by using the Bernstein probability inequality [14, 21].
Proposition 3.5
Under the assumptions of Theorem 2.1, for any \(0<\delta <1\), with confidence \(1-\delta/2\),
The estimation of \(\mathcal{S}_{1}(\mathbf{z},\eta)\) is more difficult in the sense that it involves the complexity of the function space \(\mathcal{H}_{K, \mathbf{z}}\). Hence we need the uniform concentration inequality from [22].
Proposition 3.6
Under the assumptions of Theorem 2.1, for any \(0<\delta <1\), with confidence \(1-\delta/2\),
where \(R_{\eta}=\kappa m^{1-\frac{1}{q}} (\frac{M^{2}}{\eta } )^{\frac{1}{q}}\).
4 Proof of the main result
Now we derive the learning rates.
Proof of Theorem 2.1
Combining the bounds of Propositions 3.3, 3.4, 3.5 and 3.6 with (3.5), with confidence \(1-\delta\), we have
By substituting (4.1) into (3.1), we have
When \(0< r<1/2\),
Let \(\lambda=m^{-\theta_{1}}\), \(\eta=m^{-\theta_{2}}\) and \(\sigma =m^{-\theta_{3}}\).
where
To maximize the learning rate, we take
Let
Then
Let
Then
When \(r\geq1/2\),
Similarly, we obtain
So we choose
We complete the proof of Theorem 2.1. □
5 Conclusion and further discussion
We obtained the upper error bound of the algorithm (1.9) for independent and identically distributed samples with \(1\leq q \leq2\). We decomposed the error into the approximation error, the hypothesis error and the sample error, and bounded each of them using error analysis techniques developed in learning theory. In practical applications, one often encounters non-i.i.d. sampling processes, such as weakly dependent or non-identical processes; see [13, 15, 20]. It would be interesting to extend our error analysis to non-i.i.d. samples.
References
Černý, M., Antoch, J., Hladík, M.: On the possibilistic approach to linear regression models involving uncertain, indeterminate or interval data. Inf. Sci. 244(7), 26–47 (2013)
Fasshauer, G.E.: Toward approximate moving least squares approximation with irregularly spaced centers. Comput. Methods Appl. Mech. Eng. 193(12–14), 1231–1243 (2004)
Komargodski, Z., Levin, D.: Hermite type moving-least-squares approximations. Comput. Math. Appl. 51(8), 1223–1232 (2006)
McLain, D.H.: Drawing contours from arbitrary data points. Comput. J. 17(17), 318–324 (1974)
Savitzky, A., Golay, M.J.E.: Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 36(8), 1627–1639 (1964)
Wand, M.P., Jones, M.C.: Kernel Smoothing. Chapman & Hall, New York (1995)
Shepard, D.: A two-dimensional interpolation function for irregularly-spaced data. In: ACM National Conference, pp. 517–524 (1968)
Wang, H.Y., Xiang, D.H., Zhou, D.X.: Moving least-square method in learning theory. J. Approx. Theory 162(3), 599–614 (2010)
Wang, H.Y.: Concentration estimates for the moving least-square method in learning theory. J. Approx. Theory 163(9), 1125–1133 (2011)
He, F.C., Chen, H., Li, L.Q.: Statistical analysis of the moving least-squares method with unbounded sampling. Inf. Sci. 268(1), 370–380 (2014)
Tong, H.Z., Wu, Q.: Learning performance of regularized moving least square regression. J. Comput. Appl. Math. 325, 42–55 (2017)
Guo, Q., Ye, P.: Error analysis of the moving least-squares method with non-identical sampling. Int. J. Comput. Math. 1–15 (2018, in press). https://doi.org/10.1080/00207160.2018.1469748
Guo, Q., Ye, P.X.: Coefficient-based regularized regression with dependent and unbounded sampling. Int. J. Wavelets Multiresolut. Inf. Process. 14(5), 1–14 (2016)
Wu, Q., Ying, Y.M., Zhou, D.X.: Learning rates of least-square regularized regression. Found. Comput. Math. 6(2), 171–192 (2006)
Pan, Z.W., Xiao, Q.W.: Least-square regularized regression with non-iid sampling. J. Stat. Plan. Inference 139(10), 3579–3587 (2009)
Shi, L., Feng, Y.L., Zhou, D.X.: Concentration estimates for learning with \(l^{1}\)-regularizer and data dependent hypothesis spaces. Appl. Comput. Harmon. Anal. 31(2), 286–302 (2011)
Lv, S.G., Shi, D.M., Xiao, Q.W., Zhang, M.S.: Sharp learning rates of coefficient-based \(l^{q}\)-regularized regression with indefinite kernels. Sci. China Math. 56(8), 1557–1574 (2013)
Feng, Y.L., Lv, S.G.: Unified approach to coefficient-based regularized regression. Comput. Math. Appl. 62(1), 506–515 (2011)
Nie, W.L., Wang, C.: Constructive analysis for coefficient regularization regression algorithms. J. Math. Anal. Appl. 431(2), 1153–1171 (2015)
Guo, Q., Ye, P., Cai, B.: Convergence rate for \(l^{q}\)-coefficient regularized regression with non-i.i.d. sampling. IEEE Access 6, 18804–18813 (2018)
Cucker, F., Zhou, D.X.: Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, Cambridge (2007)
Wu, Q., Ying, Y., Zhou, D.X.: Multi-kernel regularized classifiers. J. Complex. 23(1), 108–134 (2007)
Funding
This work was supported by the Natural Science Foundation of China (grant nos. 11271199, 11671213).
Author information
Contributions
All authors conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Appendices
Appendix 1: Error decomposition
Proof of Proposition 3.2
We have
Moreover, by (1.5),
This completes the proof of Proposition 3.2. □
Appendix 2: Estimates for the hypothesis error
Proof of Proposition 3.3
It is known from Theorem 2.1 in [11] that the coefficient \({\mathbf{a}}^{\mathbf{z}}=(a_{1}^{\mathbf{z}},\ldots ,a_{m}^{\mathbf{z}})^{T}\) of \(f_{\mathbf{z}, \sigma,\lambda,x}\) satisfies
where I is the identity matrix, \({\mathbf{y}}=(y_{1}, y_{2}, y_{3},\ldots, y_{m})^{T}\), \(K[{\mathbf {x}}]=(K(x_{i},x_{j}))_{i,j=1}^{m}\) and \(Q_{x}=\operatorname{diag} (\Phi (\frac{x}{\sigma},\frac{x_{i}}{\sigma} ):i=1,2,\ldots,m )\).
This implies
Therefore, for \(i=1,2,\ldots,m\), we get
By the Hölder inequality, we have
By (1.5), we have
Thus,
Since
we get
It follows from (1.3)–(1.4) that
Combining (B.1) and (B.2), we obtain our desired estimation. □
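The finite linear system underlying the proof above can also be checked numerically. We assume here, following the shape of the first-order condition for a weighted kernel least-squares problem as in [11], that the coefficients solve \((\lambda m I + Q_{x}K[\mathbf{x}])\mathbf{a}^{\mathbf{z}} = Q_{x}\mathbf{y}\); the kernel, weight and parameter values below are illustrative.

```python
import numpy as np

# Illustrative solve of the coefficient system for the RKHS-regularized
# MLS scheme; the assumed form (lam * m * I + Q_x K[x]) a = Q_x y is the
# first-order condition of a weighted kernel least-squares problem.
rng = np.random.default_rng(3)
m = 50
xs = rng.uniform(0.0, 1.0, m)
ys = np.cos(3.0 * xs) + 0.02 * rng.normal(size=m)
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * 0.1 ** 2))   # K[x]
x0, sigma, lam = 0.4, 0.2, 1e-2                                  # moving point
Q = np.diag(np.exp(-((xs - x0) / sigma) ** 2))                   # Q_x
a = np.linalg.solve(lam * m * np.eye(m) + Q @ K, Q @ ys)
f_x0 = np.exp(-(xs - x0) ** 2 / (2 * 0.1 ** 2)) @ a              # f(x0)
```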
Appendix 3: Estimates for the sample error
We estimate \(\mathcal{S}_{2}(\mathbf{z},\lambda)\) by using the following lemma from [14, 21].
Lemma C.1
Let ξ be a random variable on a probability space Z with mean \(\mathbb{E}(\xi)\) and variance \(\sigma^{2}(\xi)\), and let \(\{z_{i}\}_{i=1}^{m}\) be independent samples drawn from Z. Assume \(|\xi({z})-\mathbb{E}\xi|\leq M\) almost surely. Then, for any \(0<\delta<1\), with confidence \(1-\delta\), we have
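A Bernstein-type bound of this kind can be sanity-checked by Monte Carlo. The explicit form used below, \(\frac{1}{m}\sum_{i}\xi(z_{i})-\mathbb{E}\xi \le \frac{2M\log(1/\delta)}{3m}+\sqrt{\frac{2\sigma^{2}(\xi)\log(1/\delta)}{m}}\), is the one-sided version commonly stated in [14, 21]; the uniform test variable is our illustrative choice.

```python
import numpy as np

# Monte Carlo check: the one-sided Bernstein deviation bound should be
# exceeded with probability at most delta over repeated samples.
rng = np.random.default_rng(4)
m, delta, trials = 200, 0.05, 2000
# xi uniform on [0, 1]: mean 1/2, variance 1/12, |xi - E xi| <= 1/2.
M_bound, var = 0.5, 1.0 / 12.0
t = np.log(1.0 / delta)
bound = 2.0 * M_bound * t / (3.0 * m) + np.sqrt(2.0 * var * t / m)
devs = rng.uniform(0.0, 1.0, (trials, m)).mean(axis=1) - 0.5
fail_rate = float(np.mean(devs > bound))
```

Because Bernstein's inequality is conservative, the observed failure rate is typically far below δ.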
Proof of Proposition 3.5
Let \(g(u,y)=\int_{X}\Phi (\frac{x}{\sigma},\frac{u}{\sigma } )[(y-f_{\lambda}(u))^{2}-(y-f_{\rho}(u))^{2}]\,d\rho_{X}(x)\), for any \(z=(u,y)\in Z\). Then
Combining with (1.5), we have
Therefore,
and
By Lemma C.1, with confidence \(1-\delta/2\), we have
This completes the proof of Proposition 3.5. □
We estimate \(\mathcal{S}_{1}(\mathbf{z},\eta)\) by using the following proposition from [22].
Proposition C.1
Let \(\mathcal{F}\) be a class of bounded measurable functions. Assume that there are constants \(Q, \tau>0\) and \(\alpha\in[0,1]\) such that \(\|f\|_{\infty}\leq Q\) and \(\mathbb{E}f^{2}\leq\tau\mathbb {E}f^{\alpha}\) for every \(f\in\mathcal{F}\). If for some \(a>0\) and \(0< p<2\),
then there exists a constant \(c_{p}'\) depending only on p such that, for any \(t>0\), with probability at least \(1-e^{-t}\), we have
where
Proof of Proposition 3.6
Consider the set of functions
By (1.5),
It follows from the Schwarz inequality that
Hence,
It has been proved in [8] that
which implies
Then, for any \(g_{1}\), \(g_{2}\in\mathcal{G}_{R}\), we get
which implies
Thus from the capacity condition (2.1), we have
Now we can apply Proposition C.1 to \(\mathcal{G}_{R}\) with \(Q=8M^{2}\), \(\alpha=1\), \(\tau=16M^{2}\) and \(a=c_{p}(4M)^{p}R^{p}\). Thus for any \(0<\delta<1\), with confidence \(1-\delta/2\), we have
where \(C_{p,M}=c_{p}'(4M)^{\frac{4}{2+p}}c_{p}^{\frac{2}{2+p}}\).
Moreover, we take \(f=f_{\mathbf{z},\sigma,\eta,x}\) and derive the following bound of \(f_{\mathbf{z},\sigma,\eta,x}\) by using the same method as in Lemma 3 of [18] and (1.5):
Finally, we complete the proof of Proposition 3.6 by taking \(R=R_{\eta}=\kappa m^{1-\frac {1}{q}} (\frac{M^{2}}{\eta} )^{\frac{1}{q}}\). □
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Guo, Q., Ye, P. Error analysis for \(l^{q}\)-coefficient regularized moving least-square regression. J Inequal Appl 2018, 262 (2018). https://doi.org/10.1186/s13660-018-1856-y