Error analysis for $l^{q}$-coefficient regularized moving least-square regression

We consider the moving least-square (MLS) method in the coefficient-based regression framework with an $l^{q}$-regularizer ($1\leq q\leq 2$) and sample-dependent hypothesis spaces. The data-dependent character of the new algorithm provides flexibility and adaptivity for MLS. We carry out a rigorous error analysis by using the stepping-stone technique in the error decomposition. The concentration technique with the $l^{2}$-empirical covering number is also employed in our study to improve the sample error. We derive a satisfactory learning rate that can be arbitrarily close to the best rate $O(m^{-1})$ under more natural and much simpler conditions.


Introduction
The least-square (LS) method is an important global approximation method based on regular or concentrated data sample points. However, many practical applications in engineering and machine learning [1-4] produce irregular or scattered samples, and these also need to be analyzed. For example, in geographical contour drawing it is important to derive a set of contours when the height is available only at some scattered sample points. Therefore, it is vital to seek a suitable local approximation method to deal with scattered data. The moving least-square (MLS) method was introduced by McLain in [4] to draw a set of contours based on a cluster of scattered data sample points. The central idea of the MLS method consists of two steps: first, one takes an arbitrary fixed point and forms a local approximation formula there; second, since the fixed point is arbitrary, one lets it move over the whole domain. The MLS method has become a useful local approximation tool in various fields of mathematics such as approximation theory, data smoothing [5], statistics [6] and numerical analysis [7]. In computer graphics, the MLS method is useful for reconstructing a surface from a set of points; it is often used to create a 3D surface from a point cloud. Recently, a research effort has been made to study regression learning algorithms based on the MLS method; see [8-12]. It has an advantage over classical learning algorithms in that its hypothesis space can be very simple, such as a space of linear functions or a polynomial space.
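To make the two-step idea concrete, here is a minimal Python sketch that forms a locally weighted linear fit around a query point and then lets that point move over a grid; the Gaussian weight, the window width sigma = 0.3, the local basis {1, x} and the function names are illustrative assumptions and are not taken from the paper.

import numpy as np

def mls_fit(x_query, x_samples, y_samples, sigma=0.3):
    # Form the local approximation at the (arbitrary) fixed point x_query:
    # weights emphasise nearby samples, then a weighted least-square fit
    # with the local basis {1, x} is evaluated at x_query.
    w = np.exp(-((x_samples - x_query) ** 2) / (2.0 * sigma ** 2))
    sw = np.sqrt(w)
    A = np.stack([np.ones_like(x_samples), x_samples], axis=1)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y_samples, rcond=None)
    return coef[0] + coef[1] * x_query

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)                        # scattered sample points
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
grid = np.linspace(0.0, 1.0, 11)
mls_values = np.array([mls_fit(t, x, y) for t in grid])    # the fixed point moves over the domain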
We briefly recall the regression learning problem treated by the MLS method. Functions for learning are defined on a compact subset $X$ (the input space) of $\mathbb{R}^n$ and take values in $Y = \mathbb{R}$ (the output space). The sampling process is controlled by an unknown Borel probability measure $\rho$ on $Z = X \times Y$. The regression function is given by
$$f_\rho(x) = \int_Y y \, d\rho(y|x), \qquad x \in X,$$
where $\rho(\cdot|x)$ is the conditional probability measure induced by $\rho$ on $Y$ given $x \in X$. The goal of regression learning is to find a good approximation of the regression function $f_\rho$ based on a set of random samples $z = \{(x_i, y_i)\}_{i=1}^{m} \in Z^m$ drawn independently and identically according to the measure $\rho$.
In [11], Tong and Wu considered the following regularized MLS regression algorithm. The hypothesis space is a reproducing kernel Hilbert space (RKHS) $H_K$ induced by a Mercer kernel $K$, which is a continuous, symmetric, and positive semi-definite function on $X \times X$. The RKHS $H_K$ is the completion of the linear span of the set of functions $\{K_x := K(x,\cdot) : x \in X\}$, with the inner product determined by $\langle K_x, K_t \rangle_K = K(x,t)$. Denote by $C(X)$ the space of continuous functions on $X$ with the norm $\|\cdot\|_\infty$. Since $K$ is continuous on $X$, $H_K \subseteq C(X)$. Let $\kappa := \sup_{t,x\in X} |K(x,t)| < \infty$. Then, by (1.1), we have $\|f\|_\infty \le \sqrt{\kappa}\,\|f\|_K$ for every $f \in H_K$.

The approximation $f_{z,\lambda}$ of $f_\rho$ is defined pointwise by the localized regularization scheme (1.3)-(1.4), where $\lambda = \lambda(m) > 0$ is a regularization parameter, $\sigma = \sigma(m) > 0$ is a window width, and the MLS weight function, a map from $\mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}_+$, satisfies standard conditions involving constants $q > n+1$, $c_q > 0$ and $c > 0$. The scheme (1.3)-(1.4) shows that the regularization not only ensures computational stability but also preserves the localization property of the algorithm.

In this paper, we study a new regularized version of the MLS regression algorithm, given in (1.9). We adopt coefficient-based $l^q$-regularization and the data-dependent hypothesis space
$$H_{K,z} = \Big\{ \sum_{i=1}^{m} \alpha_i K(\cdot, x_i) : \alpha_i \in \mathbb{R},\ i=1,\ldots,m \Big\}.$$
The data-dependent nature of this kernel-based hypothesis space provides flexibility for the learning algorithm, since the $l^q$-norm regularizer acts on the coefficients of a function expansion built from the samples. Compared with the scheme (1.3)-(1.4) in a reproducing kernel Hilbert space, the first advantage of the algorithm (1.9) is the effectiveness of its computations, which require only a finite-dimensional problem in the coefficients rather than an optimization process over the whole RKHS (a small numerical sketch for the case $q=2$ is given at the end of this introduction). Another advantage is that we can choose the parameter $q$ according to the research interest, such as smoothness or sparsity. To study the approximation quality of $f_{z,\eta}$, we derive an upper bound for the error $\|f_{z,\eta} - f_\rho\|_{\rho_X}$, where $\|f\|_{\rho_X} := (\int_X |f(x)|^2 \, d\rho_X)^{1/2}$, and its convergence rate as $m \to \infty$; see [8-11, 13, 14].

The remainder of this paper is organized as follows. In Sect. 2, we will provide the main result. The error decomposition analysis and the upper bounds of the hypothesis error, the approximation error and the sample error will be given in Sect. 3. In Sect. 4, we will prove the main result. Finally, Sect. 5 concludes the paper with future research lines.
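Since the displayed formulas of scheme (1.9) are not reproduced above, the following Python sketch only illustrates the flavour of coefficient-based, locally weighted regularization for the special case $q = 2$, where minimization over the coefficients reduces to a linear system; the Gaussian kernel, the $1/m$ normalization, and all parameter values are assumptions made for illustration, not the paper's exact scheme.

import numpy as np

def gaussian_kernel(s, t, width=0.2):
    return np.exp(-((s - t) ** 2) / (2.0 * width ** 2))

def coef_regularized_mls(x_query, x_samples, y_samples, sigma=0.3, eta=1e-3):
    m = len(x_samples)
    # MLS weights localizing the empirical risk around the query point.
    w = np.exp(-((x_samples - x_query) ** 2) / (2.0 * sigma ** 2))
    W = np.diag(w)
    # Gram matrix of the data-dependent hypothesis space spanned by K(., x_i).
    K = gaussian_kernel(x_samples[:, None], x_samples[None, :])
    # For q = 2, minimizing (1/m) * sum_i w_i (f(x_i) - y_i)^2 + eta * sum_j alpha_j^2
    # over f = sum_j alpha_j K(., x_j) gives a positive definite linear system.
    alpha = np.linalg.solve(K.T @ W @ K / m + eta * np.eye(m),
                            K.T @ W @ y_samples / m)
    # Evaluate the local expansion at the query point itself.
    return gaussian_kernel(x_query, x_samples) @ alpha

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=100)
y = np.cos(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
estimate = coef_regularized_mls(0.5, x, y)

For $1 \le q < 2$ the same objective with the penalty $\eta \sum_j |\alpha_j|^q$ remains convex, so the linear solve above can be replaced by a generic convex optimizer acting on the $m$ coefficients.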

Main result
We first formulate some basic notation and assumptions.
Let $\rho_X$ be the marginal distribution of $\rho$ on $X$ and $L^2_{\rho_X}(X)$ be the Hilbert space of functions from $X$ to $Y$ that are square-integrable with respect to $\rho_X$, with the norm denoted by $\|\cdot\|_{\rho_X}$. The integral operator $L_K : L^2_{\rho_X}(X) \to L^2_{\rho_X}(X)$ is defined by
$$L_K f(x) = \int_X K(x,t) f(t)\, d\rho_X(t), \qquad x \in X.$$
Since $X$ is compact and $K$ is continuous, $L_K$ is a compact operator. Its fractional power operator $L_K^r$ ($r>0$) is given by
$$L_K^r f = \sum_i \mu_i^r \langle f, e_i\rangle_{\rho_X} e_i,$$
where $\{\mu_i\}$ are the eigenvalues of the operator $L_K$ and $\{e_i\}$ are the corresponding eigenfunctions, which form an orthonormal basis of $L^2_{\rho_X}(X)$; see [15]. For $r > 0$, the function $f_\rho$ is said to satisfy the regularity condition of order $r$ provided that $f_\rho = L_K^r(g_\rho)$ for some $g_\rho \in L^2_{\rho_X}(X)$. We show the following nice feature for the capacity of $H_{K,z}$ when the $l^2$-empirical covering number is used (see [16]): its logarithm can be bounded by a power $\varepsilon^{-p}$ of the radius $\varepsilon$, with exponent $0 < p < 2$ and constant $c_p > 0$.
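As a standard spectral reading of the regularity condition (a reformulation that follows directly from the definition of $L_K^r$ above, not an additional assumption of the paper): writing $g_\rho = \sum_i b_i e_i$, the condition $f_\rho = L_K^r(g_\rho)$ means
$$f_\rho = \sum_i \mu_i^r b_i e_i \quad\text{with}\quad \sum_i b_i^2 < \infty,$$
so a larger $r$ forces the expansion coefficients of $f_\rho$ with respect to $\{e_i\}$ to decay faster relative to the eigenvalues $\mu_i$, that is, it expresses a stronger smoothness of $f_\rho$ relative to the kernel $K$.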
We use the projection operator $\pi_M$, which truncates the values of a function to the interval $[-M, M]$, to obtain the faster learning rate under the condition that $|y| \le M$ almost surely with $M \ge 1$; see [17-19].
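In practice the projection is just a truncation of predicted values to $[-M, M]$; a one-line Python sketch (the name project_M is illustrative):

import numpy as np

def project_M(values, M=1.0):
    # pi_M(f)(x) = f(x) if |f(x)| <= M, and sign(f(x)) * M otherwise.
    return np.clip(values, -M, M)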
We assume all the constants are positive and independent of δ, m, λ, η or σ . Now we are in a position to give the learning rates of the algorithm (1.9).
Theorem 2.1 assumes that all the functions $f \in H_K \cup \{f_\rho\}$ satisfy the Lipschitz condition on $X$, that is, $|f(x) - f(u)| \le c_0 |x - u|$ for all $x, u \in X$ and some constant $c_0 > 0$. Under this condition, together with the regularity and capacity assumptions above, it states that the learning rate of the algorithm (1.9) can be made arbitrarily close to $O(m^{-1})$ by appropriate choices of $\lambda$, $\eta$ and $\sigma$.

Error analysis
We only present the statements of the main propositions in this section; all the proofs will be given in the appendices. To estimate $\|\pi_M(f_{z,\eta}) - f_\rho\|_{\rho_X}^2$, we invoke a proposition whose proof closely parallels that of Theorem 3.3 in [11]: it bounds this quantity by the integral in (3.1) of what is called the local moving expected risk.
Thus we only need to provide an upper bound for the integral in (3.1). To do this, we decompose it by using $f_{z,\lambda}$, which plays a stepping-stone role between $f_{z,\eta}$ and the regularizing function $f_\lambda$, while different regularization parameters $\lambda$ and $\eta$ are adopted in the two schemes.
Here $f_\lambda$ is the regularizing function with regularization parameter $\lambda$, which appears in the approximation error $D(\lambda)$ below.

Proposition 3.2 Let $f_{z,\sigma,\eta,x}$ be defined as in (1.9) and consider the corresponding local moving empirical risk. Then the integral in (3.1) can be bounded by the sum of three terms: $S(z, \lambda, \eta)$, known as the sample error; $H(z, \lambda, \eta)$, called the hypothesis error; and $D(\lambda)$, called the approximation error.
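For orientation, a schematic version of the stepping-stone decomposition behind Proposition 3.2 is sketched below; $\mathcal{E}_x$ and $\mathcal{E}_{z,x}$ denote the local moving expected and empirical risks, $\Omega_z(f_{z,\eta})$ the coefficient penalty, and the precise weighted terms used in the paper may differ:
$$
\mathcal{E}_x(f_{z,\eta}) - \mathcal{E}_x(f_\rho)
\le \underbrace{\bigl[\mathcal{E}_x(f_{z,\eta}) - \mathcal{E}_{z,x}(f_{z,\eta})\bigr] + \bigl[\mathcal{E}_{z,x}(f_\lambda) - \mathcal{E}_x(f_\lambda)\bigr]}_{S(z,\lambda,\eta)}
+ \underbrace{\bigl[\mathcal{E}_{z,x}(f_{z,\eta}) + \eta\,\Omega_z(f_{z,\eta})\bigr] - \bigl[\mathcal{E}_{z,x}(f_{z,\lambda}) + \lambda\,\|f_{z,\lambda}\|_K^2\bigr]}_{H(z,\lambda,\eta)}
+ \underbrace{\mathcal{E}_x(f_\lambda) - \mathcal{E}_x(f_\rho) + \lambda\,\|f_\lambda\|_K^2}_{D(\lambda)},
$$
where the inequality uses $\mathcal{E}_{z,x}(f_{z,\lambda}) + \lambda\|f_{z,\lambda}\|_K^2 \le \mathcal{E}_{z,x}(f_\lambda) + \lambda\|f_\lambda\|_K^2$, which holds because $f_{z,\lambda}$ minimizes the regularized local moving empirical risk; this is precisely the stepping-stone role of $f_{z,\lambda}$ mentioned above.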
The estimation of the hypothesis error can be conducted analogously to that in [18].

Proposition 3.3 Under the assumptions of Theorem 2.1, we have an upper bound for the hypothesis error $H(z, \lambda, \eta)$.
For the approximation error, we directly invoke the following result in [20].

Proposition 3.4 Under the regularity assumption $f_\rho = L_K^r(g_\rho)$ of order $r$, the approximation error $D(\lambda)$ admits an upper bound depending on $r$ and $\lambda$.
For the sample error, we decompose it into two parts, $S_1(z, \eta)$ and $S_2(z, \lambda)$. We first give the upper bound of $S_2(z, \lambda)$ by using the Bernstein probability inequality in [14, 21].

Proposition 3.5
Under the assumptions of Theorem 2.1, for any $0 < \delta < 1$, the bound on $S_2(z, \lambda)$ holds with confidence $1 - \delta/2$.

The estimation of $S_1(z, \eta)$ is more difficult in the sense that it involves the complexity of the function space $H_{K,z}$. Hence we need the uniform concentration inequality from [22].

Proof of the main result
Now we derive the learning rates.
Choosing the regularization parameters $\lambda$ and $\eta$ and the window width $\sigma$ appropriately in terms of the sample size $m$, we complete the proof of Theorem 2.1.

Conclusion and further discussion
We obtained an upper error bound for the algorithm (1.9) with independent and identically distributed samples and $1 \le q \le 2$. We decomposed the error quantity into the approximation error, the hypothesis error and the sample error, and obtained their upper bounds using error analysis techniques developed in learning theory. In practical applications we may often encounter non-i.i.d. sampling processes, such as weakly dependent or non-identical processes; see [13, 15, 20]. It would be interesting to continue our error analysis for non-i.i.d. samples.

Appendix 3: Estimates for the sample error
We estimate $S_2(z, \lambda)$ by using the following lemma from [14, 21].
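The lemma itself is not reproduced in this excerpt; for orientation, one standard one-sided Bernstein probability inequality of the kind used at this step reads as follows (the exact form and constants in [14, 21] may differ). Let $\xi$ be a random variable on $Z$ with mean $\mu = E\xi$, variance $v^2 = E(\xi-\mu)^2$, and $|\xi - \mu| \le B$ almost surely; then for every $0 < \delta < 1$, with confidence $1 - \delta$,
$$\frac{1}{m}\sum_{i=1}^{m} \xi(z_i) - \mu \le \frac{2B\log(1/\delta)}{3m} + \sqrt{\frac{2 v^2 \log(1/\delta)}{m}}.$$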