Adaptive group bridge estimation for high-dimensional partially linear models

Journal of Inequalities and Applications 2017, 2017:158

  • Received: 5 January 2017
  • Accepted: 16 June 2017


Abstract

This paper studies group selection for the partially linear model with a diverging number of parameters. We propose an adaptive group bridge method and study the consistency, convergence rate and asymptotic distribution of the global adaptive group bridge estimator under regularity conditions. Simulation studies and a real data example show the finite sample performance of our method.


Keywords

  • adaptive group bridge
  • high dimension
  • partially linear model

MSC

  • 62E20
  • 62J07
  • 62F12

1 Introduction

Consider the following model:
$$ Y=\mathbf {x}^{T}\boldsymbol {\beta }+f(U)+\varepsilon, \qquad (1) $$
where \(\mathbf {x}=(\boldsymbol{x}_{1}^{T},\boldsymbol{x}_{2}^{T},\ldots,\boldsymbol{x}_{p_{n}}^{T})^{T}\) is a covariate vector with \(\boldsymbol{x}_{j}=(X_{jk},k=1,\ldots,d_{j})^{T}\) being a \(d_{j}\times1\) vector corresponding to the jth group in the linear part, \(\boldsymbol {\beta }=(\boldsymbol {\beta }_{j}^{T},j=1,\ldots,p_{n})^{T}\) with \(\boldsymbol {\beta }_{j}\) being the \(d_{j}\times1\) vector of regression coefficients, f is an unknown function of U, and ε is the random error with mean zero. Without loss of generality, U is scaled to \([0, 1]\). Furthermore, \((\mathbf {x},U)\) and ε are independent.

Variable selection for high-dimensional data is an important and active research topic. Penalized regression methods have been widely studied in the literature; see, e.g., [1–5]. Among these methods, bridge regression, which includes the lasso and ridge regression as two well-known special cases, has been studied by many authors (e.g., [6–10]). [11] studied adaptive bridge estimation for high-dimensional linear models. In addition, group structure among variables arises frequently in contemporary statistical modeling problems. [12] proposed a group bridge method which not only effectively removes unimportant groups, but also retains the flexibility of selecting variables within the identified groups. [13] investigated an adaptive choice of the penalty order in group bridge regression.

The aforementioned model (1) is the partially linear model, which originated from [14]. The partially linear model is a popular semiparametric model that enjoys both interpretability and flexibility. Our contributions in this paper are threefold: (1) we propose an adaptive group bridge method to achieve group selection for high-dimensional partially linear models; (2) we consider the choice of the index γ in the adaptive group bridge penalty and use leave-one-observation-out cross-validation (CV) to implement this choice, which significantly reduces the computational burden; (3) we establish the consistency, convergence rate and asymptotic distribution of the adaptive group bridge estimator, defined as the global minimizer of the objective function.

The rest of the article is organized as follows. Section 2 introduces the adaptive group bridge method. In Section 3, we state the assumptions and asymptotic results for the global adaptive group bridge estimator. Section 4 describes the computational algorithm and the selection of tuning parameters. Simulation studies and a real data analysis are presented in Section 5. Section 6 gives a short discussion. Technical proofs are relegated to the Appendix.

2 Adaptive group bridge in the partially linear model

Suppose that we have a collection of independent observations \(\{(\mathbf {x}_{i},U_{i},Y_{i}), 1\leq i \leq n \}\) from model (1). That is,
$$ Y_{i} = \mathbf {x}^{T}_{i}\boldsymbol {\beta }+ f(U_{i}) + \varepsilon_{i},\quad i = 1, \ldots, n, \qquad (2) $$
where \(\varepsilon_{1}, \ldots, \varepsilon_{n}\) are i.i.d. random errors with mean zero and finite variance \(\sigma^{2}<\infty\).
To obtain an estimate of function \(f(\cdot)\), we employ a B-spline basis. Denote \(\mathcal{S}_{n}\) as the space of polynomial splines of degree \(m\geq 1\). Let \(\{B_{k}(u), 1\leq k\leq q_{n}\}\) be a normalized B-spline basis with \(\|B_{k}\|_{\infty}\leq1\), where \(\|\cdot\|_{\infty}\) is the sup norm. Then, for any \(f_{n}\in\mathcal{S}_{n}\), we have
$$f_{n}(u)=\sum_{j=1}^{q_{n}}B_{j}(u) \alpha_{j}\triangleq \mathbf {B}(u)^{T}\boldsymbol {\alpha }. $$
Under some smoothness conditions, the nonparametric function f can well be approximated by functions in \(\mathcal{S}_{n}\).
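As an illustrative sketch (not from the paper), such a normalized B-spline design matrix can be built with SciPy. The choices of a cubic basis and \(q_{n}=7\) anticipate Section 4.2, and the target \(\cos(2\pi u)\) is the nonparametric function used later in the simulations; all names are our own:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(u, qn=7, degree=3):
    """Design matrix with qn normalized B-spline basis functions on [0, 1]."""
    interior = np.linspace(0, 1, qn - degree + 1)[1:-1]   # qn - degree - 1 interior knots
    knots = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]
    # Identity coefficients make BSpline return every basis function at once.
    return BSpline(knots, np.eye(qn), degree)(u)

# Least-squares approximation of f(u) = cos(2*pi*u)
u = np.linspace(0, 1, 200)
Z = bspline_design(u)                                     # rows are B(u_i)^T
alpha, *_ = np.linalg.lstsq(Z, np.cos(2 * np.pi * u), rcond=None)
f_hat = Z @ alpha                                         # spline approximation of f
```

Each row of `Z` is \(\mathbf{B}(u_i)^{T}\); by the partition-of-unity property of a normalized B-spline basis, each row sums to one.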
Consider the following adaptive group bridge penalized objective function:
$$ \sum_{i=1}^{n} \bigl(Y_{i}-\mathbf {x}_{i}^{T}\boldsymbol {\beta }- \mathbf {B}(U_{i})^{T} \boldsymbol {\alpha }\bigr)^{2}+ \sum_{j=1}^{p_{n}} \lambda_{j}\|\boldsymbol {\beta }_{j}\|^{\gamma}, \qquad (3) $$
where \(\lambda_{j}\), \(j=1,\ldots,p_{n}\), are the tuning parameters, and \(\| \cdot\|\) denotes the \(L_{2}\) norm on the Euclidean space. Let \(\mathbf {Y}=(Y_{1},\ldots,Y_{n})^{T}\), \(\mathbb {X}=(X_{ijk},1\leq i\leq n,1\leq j\leq p_{n}, 1\leq k\leq d_{j})=(\mathbf {x}_{1},\ldots, \mathbf {x}_{n})^{T}\) and \(\mathbf {Z}=(\mathbf {B}(U_{1}),\ldots, \mathbf {B}(U_{n}))^{T}\). Then (3) can be rewritten in matrix form as
$$ L_{n}(\boldsymbol {\beta },\boldsymbol {\alpha })=\|\mathbf {Y}-\mathbb {X}\boldsymbol {\beta }-\mathbf {Z}\boldsymbol {\alpha }\|^{2}+ \sum_{j=1}^{p_{n}}\lambda_{j}\| \boldsymbol {\beta }_{j}\|^{\gamma}. \qquad (4) $$
For a given β, the optimal α minimizing \(L_{n}(\cdot)\) satisfies the first-order condition
$$\partial L_{n}(\boldsymbol {\beta },\boldsymbol {\alpha })/\partial \boldsymbol {\alpha }=0, $$
that is,
$$\mathbf {Z}^{T}\mathbf {Z}\boldsymbol {\alpha }=\mathbf {Z}^{T}(\mathbf {Y}-\mathbb {X}\boldsymbol {\beta }). $$
Let \(H=\mathbf {Z}(\mathbf {Z}^{T}\mathbf {Z})^{-1}\mathbf {Z}^{T}\); note that H is a projection matrix. Substituting the optimal α back, we can rewrite (4) as follows:
$$ Q_{n}(\boldsymbol {\beta })= \bigl\Vert (I-H) (\mathbf {Y}-\mathbb {X}\boldsymbol {\beta }) \bigr\Vert ^{2} + \sum_{j=1}^{p_{n}} \lambda_{j} \Vert \boldsymbol {\beta }_{j} \Vert ^{\gamma}. \qquad (5) $$
For a fixed \(\gamma>0\), define \(\hat {\boldsymbol {\beta }}=\arg\min Q_{n}(\boldsymbol {\beta })\); then \(\hat {\boldsymbol {\beta }}\) is called the adaptive group bridge estimator. Once \(\hat {\boldsymbol {\beta }}\) is obtained, the estimator \(\hat {\boldsymbol {\alpha }}\) follows from the normal equation above, and we obtain the estimator of the nonparametric part, namely \(\hat{f}_{n}(u)=\mathbf {B}(u)^{T}\hat {\boldsymbol {\alpha }}\).
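To make the profiled criterion \(Q_{n}(\boldsymbol{\beta})\) concrete, here is a minimal NumPy sketch; the function and argument names are our own, and \(\boldsymbol{\beta}\) is passed as a list of group blocks:

```python
import numpy as np

def Qn(beta_groups, Y, X, Z, lam, gamma):
    """Profiled adaptive group bridge objective Q_n(beta).

    beta_groups: list of 1-d arrays, one block per group
    lam:         group-specific tuning parameters lambda_j
    """
    beta = np.concatenate(beta_groups)
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto span(Z)
    resid = (Y - X @ beta) - H @ (Y - X @ beta)      # (I - H)(Y - X beta)
    penalty = sum(l * np.linalg.norm(b) ** gamma
                  for l, b in zip(lam, beta_groups))
    return float(resid @ resid + penalty)
```

With `Z` a single column of ones, `H` simply centers the data, so `Qn` at \(\boldsymbol{\beta}=\mathbf{0}\) reduces to the centered sum of squares of `Y`.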

3 Asymptotic properties

In this section, we show the oracle property of the parametric part. For convenience of the statement, we first give some notations. Define \(\mathbf{g}(u)=E(\mathbf {x}|U=u)\) and \(\tilde {\mathbf {x}}=\mathbf {x}-E(\mathbf {x}|U)\). Let \(\Sigma(u)\) be the conditional covariance matrix of \(\tilde {\mathbf {x}}\), i.e., \(\Sigma(u)=\operatorname{cov}(\tilde {\mathbf {x}}|U=u)\). Denote Ω as the unconditional covariance matrix of \(\tilde {\mathbf {x}}\), i.e., \(\Omega=E[\Sigma(U)]\). The corresponding sample version is \(\mathbf {G}=(\mathbf{g}(U_{1}),\ldots,\mathbf{g}(U_{n}))^{T}\) with \(\mathbf {g}(U_{i})=E(\mathbf {x}_{i}|U_{i})\) and \(\widetilde {\mathbb {X}}=(\tilde {\mathbf {x}}_{1},\ldots, \tilde {\mathbf {x}}_{n})^{T}\) with \(\tilde {\mathbf {x}}_{i}=\mathbf {x}_{i}-E(\mathbf {x}_{i}|U_{i})\).

Let the true parameter be \(\boldsymbol {\beta }_{0}=(\boldsymbol {\beta }_{01}^{T},\ldots, \boldsymbol {\beta }_{0p_{n}}^{T})^{T}\triangleq(\boldsymbol {\beta }_{10}^{T},\boldsymbol {\beta }_{20}^{T})^{T}\). Let \(\mathcal {A}= \{ 1\leq j \leq p_{n}: \|\boldsymbol {\beta }_{0j}\| \neq0\}\) be the index set of the nonzero groups. Without loss of generality, we assume that the coefficients of the first \(k_{n}\) groups are nonzero, i.e., \(\mathcal {A}=\{1,2,\ldots,k_{n}\}\). Let \(|\mathcal {A}| = k_{n}\) be the cardinality of the set \(\mathcal {A}\), which is allowed to increase with n. For \(j\notin \mathcal {A}\), \(\|\boldsymbol {\beta }_{0j}\|=0\). Define \(\boldsymbol {\beta }_{10}=(\boldsymbol {\beta }_{0j}^{T},j\in \mathcal {A})^{T}\), \(\boldsymbol {\beta }_{20}=(\boldsymbol {\beta }_{0j}^{T},j\notin \mathcal {A})^{T}\). Let \(d^{*}=\max_{1\leq j \leq p_{n}}d_{j}\), \(\varphi_{n1}=\max\{\lambda_{j},j\in \mathcal {A}\}\) and \(\varphi_{n2}=\min\{ \lambda_{j},j\notin \mathcal {A}\}\).

Corresponding to the partition of \(\boldsymbol {\beta }_{0}\), denote \(\hat {\boldsymbol {\beta }}=(\hat {\boldsymbol {\beta }}_{(1)}^{T}, \hat {\boldsymbol {\beta }}_{(2)}^{T})^{T}\) and decompose
$$\mathbb {X}=(\mathbb {X}_{1} \mathbb {X}_{2}),\qquad \mathbf {G}=(\mathbf {G}_{1} \mathbf {G}_{2}),\qquad \widetilde {\mathbb {X}}=(\widetilde {\mathbb {X}}_{1} \widetilde {\mathbb {X}}_{2}), \qquad \Omega= \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}}\Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{array}\displaystyle \right ). $$
The following conditions are required for the B-spline approximation of function f.
(C1) The distribution of U is absolutely continuous, and its density is bounded away from 0 and ∞.

(C2) (Hölder conditions on \(f(\cdot)\) and \(g_{j}(\cdot)\), where \(g_{j}\) is the jth component of g.) Let l, δ and M be real constants such that \(0<\delta\leq1\) and \(M>0\). \(f(\cdot)\) and \(g_{j}(\cdot)\) belong to the class of functions \(\mathcal{H}\),
$$ \mathcal{H}=\bigl\{ h: \bigl\vert h^{(l)}(u_{1})-h^{(l)}(u_{2}) \bigr\vert \leq M \vert u_{1}-u_{2} \vert ^{\delta }, \text{for } 0\leq u_{1}, u_{2} \leq1\bigr\} , $$
where \(0< l\leq m-1\) and \(r=l+\delta\).
We further impose the following regularity conditions, which are needed to establish the asymptotic results.
(A1) Let \(\lambda_{\max}(\Omega)\) and \(\lambda_{\min }(\Omega)\) be the largest and smallest eigenvalues of Ω, respectively. There exist constants \(\tau_{1}\) and \(\tau_{2}\) such that
$$0< \tau_{1}\leq\lambda_{\min}(\Omega)\leq\lambda_{\max}( \Omega )\leq\tau_{2}< \infty. $$

(A2) There exist constants \(0< b_{0}< b_{1}<\infty\) such that
$$b_{0}\leq\min\bigl\{ \Vert \boldsymbol {\beta }_{0j} \Vert , 1\leq j \leq k_{n}\bigr\} \leq\max\bigl\{ \Vert \boldsymbol {\beta }_{0j} \Vert , 1\leq j\leq k_{n}\bigr\} \leq b_{1}. $$

(A3) \(\|n^{-1}\mathbb {X}^{T}(I-H)\mathbb {X}-\Omega\|\stackrel{P}{\rightarrow}0\); \(E[\operatorname{tr}(\mathbb {X}^{T}(I-H)\mathbb {X})]=O(np_{n})\).

(A4) \(d^{*}=O(1)\), \(p_{n}^{2}/n\rightarrow0\) and \(n^{-1}\varphi _{n1}k_{n}\rightarrow0\).

(A5) (a) \(\varphi_{n1}k_{n}^{1/2}/(\sqrt{np_{n}}+n\sqrt {p_{n}}q_{n}^{-r})\rightarrow0\); (b) \(\varphi_{n2}(\sqrt{n^{-1}p_{n}}+\sqrt {p_{n}}q_{n}^{-r})^{\gamma-2}/n\rightarrow\infty\).

(A6) For every \(1\leq j \leq p_{n}\) and \(1\leq k\leq d_{j}\), \(E[X_{1jk}-E(X_{1jk}|U_{1})]^{4}\) is bounded. Furthermore, \(E(\varepsilon ^{4})\) is bounded.


Conditions (A1) and (A2) are commonly used. Condition (A3) holds under mild conditions; see Lemmas 1 and 2 of [15]. Condition (A4) is used to obtain the consistency of the estimator. Condition (A5) is needed in the proof of the convergence rate. Condition (A6) is required to derive the asymptotic distribution.

Theorem 3.1


Suppose that \(\gamma>0\) and that conditions (A1)-(A4) hold. Then
$$\Vert \hat {\boldsymbol {\beta }}-\boldsymbol {\beta }_{0} \Vert ^{2}=O_{P} \bigl(n^{-1}d^{*}p_{n}+q_{n}^{-2r}+n^{-1} \varphi_{n1}k_{n}\bigr); $$
in particular, \(\|\hat {\boldsymbol {\beta }}-\boldsymbol {\beta }_{0}\|\stackrel{P}{\rightarrow}0\).

Theorem 3.1 implies that, under the stated conditions, the estimator converges in probability to the true parameter value.

Theorem 3.2

Convergence rate

Suppose that conditions (A1)-(A5) hold. Then
$$\Vert \hat {\boldsymbol {\beta }}-\boldsymbol {\beta }_{0} \Vert =O_{P}\bigl( \sqrt{n^{-1}p_{n}}+\sqrt{p_{n}}q_{n}^{-r} \bigr). $$

This theorem shows that the adaptive group bridge estimator attains the optimal convergence rate even when \(p_{n}\rightarrow\infty\).

Theorem 3.3

Oracle property

Suppose that \(0<\gamma<1\), \(n^{-1}k_{n}q_{n}\rightarrow0\) and \(nq_{n}^{-2r}\rightarrow0\). If conditions (A1)-(A6) are satisfied, then we have
(i) \(\Pr(\hat {\boldsymbol {\beta }}_{(2)}={\mathbf{0}})\rightarrow1\) as \(n\rightarrow\infty\);

(ii) Let \(u_{n}^{2}=n^{2}\boldsymbol {\omega }_{n}^{T}(\mathbb {X}^{T}_{1}(I-H)\mathbb {X}_{1})^{-1}\Omega _{11}(\mathbb {X}^{T}_{1}(I-H)\mathbb {X}_{1})^{-1}\boldsymbol {\omega }_{n}\), where \(\boldsymbol {\omega }_{n}\) is a \(\sum_{j=1}^{k_{n}}d_{j}\)-vector with \(\|\boldsymbol {\omega }_{n}\|^{2}=1\); then
$$n^{1/2}u_{n}^{-1}\boldsymbol {\omega }_{n}^{T}( \hat {\boldsymbol {\beta }}_{(1)}-\boldsymbol {\beta }_{10})\stackrel{D}{\rightarrow }N\bigl(0, \sigma^{2}\bigr). $$

This theorem states that the adaptive group bridge performs as well as the oracle [16].

4 Computational algorithm and selection of tuning parameters

4.1 Computational algorithm

In this section, we apply the local quadratic approximation (LQA) algorithm proposed by [3] to compute the adaptive group bridge estimate.

The ordinary least squares estimate is chosen as the initial value \({\boldsymbol {\beta }}^{(0)}\). The penalty term \(p_{\lambda_{j}}(\|{\boldsymbol {\beta }}_{j}\|)=\lambda_{j}\| \boldsymbol {\beta }_{j}\|^{\gamma}\) can be approximated as
$$p_{\lambda_{j}}\bigl( \Vert {\boldsymbol {\beta }}_{j} \Vert \bigr)\approx p_{\lambda_{j}}\bigl( \bigl\Vert {\boldsymbol {\beta }}_{j}^{(0)} \bigr\Vert \bigr)+\frac{1}{2}\bigl\{ p_{\lambda_{j}}'\bigl( \bigl\Vert {\boldsymbol {\beta }}_{j}^{(0)} \bigr\Vert \bigr)/ \bigl\Vert { \boldsymbol {\beta }}_{j}^{(0)} \bigr\Vert \bigr\} \bigl( \Vert { \boldsymbol {\beta }}_{j} \Vert ^{2}- \bigl\Vert {\boldsymbol {\beta }}_{j}^{(0)} \bigr\Vert ^{2}\bigr), $$
provided \(\|{\boldsymbol {\beta }}_{j}^{(0)}\|>0\). Minimizing the resulting quadratic approximation yields the iterative expression
$$\begin{aligned} \boldsymbol {\beta }^{(1)}=\bigl[\mathbb {X}^{T}(I-H)\mathbb {X}+n \Sigma_{\lambda,\gamma}\bigl({\boldsymbol {\beta }}^{(0)}\bigr)\bigr]^{-1} \mathbb {X}^{T}(I-H)\mathbf {Y}, \qquad (6) \end{aligned}$$
where
$$\begin{aligned} \Sigma_{\lambda,\gamma}\bigl({\boldsymbol {\beta }}^{(0)}\bigr)=\operatorname{diag} \biggl\{ \frac {p_{\lambda_{j}}'(\|{\boldsymbol {\beta }}^{(0)}_{j}\|)}{\|{\boldsymbol {\beta }}^{(0)}_{j}\| }I_{d_{j}},j=1,\ldots,p_{n} \biggr\} , \end{aligned}$$
with \(I_{d_{j}}\) being the \(d_{j}\times d_{j}\) identity matrix. If some \(\|\boldsymbol {\beta }^{(1)}_{j}\|\) is smaller than 10−3, then we set \(\boldsymbol {\beta }^{(1)}_{j}={\mathbf{0}}\). The final estimate is obtained by iterating formula (6) until convergence is achieved.
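The iteration above can be sketched in NumPy as follows, using \(p'_{\lambda_{j}}(t)=\lambda_{j}\gamma t^{\gamma-1}\) for the LQA weights; the active-set handling of zeroed groups, the tolerance, and all names are our own implementation choices:

```python
import numpy as np

def agb_lqa(Y, X, Z, group_sizes, lam, gamma, tol=1e-6, max_iter=200):
    """Adaptive group bridge estimate via the LQA update.

    lam[j] is the tuning parameter of group j; any group whose norm drops
    below 1e-3 is set to zero and removed from later iterations.
    """
    n, p = X.shape
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)                 # projection onto span(Z)
    Xs, Ys = X - H @ X, Y - H @ Y                         # (I - H)X and (I - H)Y
    idx = np.split(np.arange(p), np.cumsum(group_sizes)[:-1])
    beta = np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys)          # OLS initial value
    active = list(range(len(idx)))
    for _ in range(max_iter):
        if not active:
            break
        beta_old = beta.copy()
        cols = np.concatenate([idx[j] for j in active])
        Xa = Xs[:, cols]
        # LQA weights: p'_{lam_j}(||b_j||)/||b_j|| = lam_j * gamma * ||b_j||^(gamma-2)
        w = np.concatenate([np.full(len(idx[j]),
                                    lam[j] * gamma *
                                    np.linalg.norm(beta[idx[j]]) ** (gamma - 2))
                            for j in active])
        beta = np.zeros(p)
        beta[cols] = np.linalg.solve(Xa.T @ Xa + n * np.diag(w), Xa.T @ Ys)
        for j in list(active):                            # threshold small groups to zero
            if np.linalg.norm(beta[idx[j]]) < 1e-3:
                beta[idx[j]] = 0.0
                active.remove(j)
        if np.linalg.norm(beta - beta_old) < tol:
            break
    return beta
```

Removing a zeroed group from the design avoids the division by \(\|\boldsymbol{\beta}_{j}\|=0\) in the weight, which is the usual practical workaround for LQA-type algorithms.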

4.2 Selection of the tuning parameters

For our method, \(q_{n}\), γ, and \(\lambda_{j}\) (\(j=1,\ldots,p_{n}\)) need to be chosen. For convenience, a cubic spline basis (\(m=4\)) is used and we set \(q_{n}=7\); simulation results demonstrate that this choice performs quite well. Although this leaves \(p_{n}\) tuning parameters \(\lambda_{j}\), we in fact only need to select a single tuning parameter λ by setting \(\lambda_{j}=\lambda/\|\boldsymbol {\beta }_{j}^{(0)}\|\). We use ‘leave-one-observation-out’ cross-validation (CV) to select λ and γ. By the convergence of the algorithm, we have
$$\hat {\boldsymbol {\beta }}=\bigl[\mathbb {X}^{T}(I-H)\mathbb {X}+n\Sigma_{\lambda,\gamma}({\hat {\boldsymbol {\beta }}}) \bigr]^{-1}\mathbb {X}^{T}(I-H)\mathbf {Y}, $$
where \(\hat {\boldsymbol {\beta }}\) is obtained based on the whole data set. Note that it is the solution of the ridge regression
$$ \bigl\Vert \mathbf {Y}^{*}-\mathbb {X}^{*}\boldsymbol {\beta }\bigr\Vert ^{2}+n \boldsymbol {\beta }^{T}\Sigma_{\lambda,\gamma}({\hat {\boldsymbol {\beta }}})\boldsymbol {\beta }, \qquad (7) $$
where \(\mathbf {Y}^{*}=(I-H)\mathbf {Y}\) and \(\mathbb {X}^{*}=(I-H)\mathbb {X}\). Let \(\mathbf {Y}^{*}=(y_{1}^{*},\ldots,y_{n}^{*})^{T}\) and \(\mathbb {X}^{*}=(\mathbf {x}_{1}^{*},\ldots, \mathbf {x}_{n}^{*})^{T}\). The CV error is
$$CV(\lambda,\gamma)=\frac{1}{n}\sum_{i=1}^{n} \bigl(y_{i}^{*}-\mathbf {x}_{i}^{*T}\hat {\boldsymbol {\beta }}^{-i} \bigr)^{2}, $$
where \(\hat {\boldsymbol {\beta }}^{-i}\) is obtained by solving (7) without the ith observation. The direct computation of the CV error is intensive, so we use the following formula, which can be proved similarly to [17]:
$$CV(\lambda,\gamma)=\frac{1}{n}\sum_{i=1}^{n} \bigl(y_{i}^{*}-\mathbf {x}_{i}^{*T}\hat {\boldsymbol {\beta }}\bigr)^{2}/(1-D_{ii}), $$
where \(D_{ii}\) is the \((i,i)\)th diagonal element of \((I-H)\mathbb {X}[\mathbb {X}^{T}(I-H)\mathbb {X}+n\Sigma_{\lambda,\gamma}({\hat {\boldsymbol {\beta }}})]^{-1}\mathbb {X}^{T}(I-H)\). It is obvious that this method can significantly reduce the computational burden.
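The shortcut rests on the standard leave-one-out identity for ridge-type estimators with a fixed penalty matrix: the deleted residual equals the full-data residual divided by \(1-D_{ii}\). Below is a small numerical check of this identity, with a generic fixed penalty matrix `P` standing in for \(n\Sigma_{\lambda,\gamma}(\hat{\boldsymbol{\beta}})\); the data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 4
X = rng.standard_normal((n, p))
Y = X @ np.array([1.0, -0.5, 0.0, 2.0]) + rng.standard_normal(n)
P = 0.5 * np.eye(p)                                  # fixed ridge-type penalty matrix

beta = np.linalg.solve(X.T @ X + P, X.T @ Y)         # full-data estimate
D = X @ np.linalg.solve(X.T @ X + P, X.T)            # hat matrix; D_ii = its diagonal

for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.solve(X[keep].T @ X[keep] + P, X[keep].T @ Y[keep])
    loo_resid = Y[i] - X[i] @ b_i                    # exact deleted residual (refit)
    shortcut = (Y[i] - X[i] @ beta) / (1 - D[i, i])  # one-shot formula
    assert np.isclose(loo_resid, shortcut)
```

The identity follows from the Sherman–Morrison formula applied to \(X^{T}X+P-x_{i}x_{i}^{T}\), which is why a single fit suffices to evaluate all n deleted residuals.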

5 Simulation studies and application

In this section, we investigate the finite sample performance of the adaptive group bridge method through simulations and a real data application.

5.1 Monte Carlo simulations

We simulate 100 datasets consisting of n observations from the following partially linear model:
$$ Y_{i}=\sum_{j=1}^{p_{n}} \boldsymbol{x}_{ij}^{T}\boldsymbol {\beta }_{j}+\cos(2\pi U_{i})+\varepsilon _{i},\quad i=1,\ldots,n, $$
where \(n=500\), and the error \(\varepsilon_{i}\sim N(0,\sigma^{2})\) with \(\sigma=0.5,1,4\). We consider \(p_{n}\) groups with \(p_{n}=10,30,50\), each consisting of three variables. The true values of the parameters are \(\boldsymbol {\beta }_{1}^{T}=(0.5,1,1.5)\), \(\boldsymbol {\beta }_{2}^{T}=(1,-1,1)\), \(\boldsymbol {\beta }_{3}^{T}=(0.5,0.5,0.5)\) and \(\boldsymbol {\beta }_{4}^{T}=\cdots =\boldsymbol {\beta }_{p_{n}}^{T}=(0,0,0)\). \(U_{i}\) follows the uniform distribution on \([0,1]\). To generate the covariate \(\mathbf {x}=(\boldsymbol{x}_{1}^{T},\boldsymbol{x}_{2}^{T},\ldots,\boldsymbol{x}_{p_{n}}^{T})^{T}\) with \(\boldsymbol{x}_{j}=(X_{jk},k=1,2,3)^{T}\), we first simulate \(R_{1},\ldots ,R_{3p_{n}}\) independently from the standard normal distribution. Next, we simulate \(Z_{j}\), \(j=1,\ldots,p_{n}\), from a multivariate normal distribution with mean zero and \(\operatorname{Cov}(Z_{j},Z_{l})=0.6^{|j-l|}\). Then the covariates are generated as \(X_{jk}=(Z_{j}+R_{3(j-1)+k})/\sqrt{2}\), \(j=1,\ldots,p_{n}\), \(k=1,2,3\).
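The covariate-generating scheme above can be sketched as follows (the seed and sample size here are arbitrary illustrative choices; the scheme gives each \(X_{jk}\) unit variance and within-group correlation 0.5):

```python
import numpy as np

def gen_covariates(n, pn, rng):
    """X_{jk} = (Z_j + R_{3(j-1)+k}) / sqrt(2), Cov(Z_j, Z_l) = 0.6^{|j-l|}."""
    cov = 0.6 ** np.abs(np.subtract.outer(np.arange(pn), np.arange(pn)))
    Zmat = rng.multivariate_normal(np.zeros(pn), cov, size=n)   # n x pn group factors
    R = rng.standard_normal((n, 3 * pn))                        # i.i.d. N(0, 1) noise
    # repeat each Z_j across its 3 group members, then mix with R
    return (np.repeat(Zmat, 3, axis=1) + R) / np.sqrt(2)        # n x 3*pn design

rng = np.random.default_rng(2023)
X = gen_covariates(n=5000, pn=10, rng=rng)
```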
We compare the adaptive group bridge (AGB) with the group lasso (GL) and the group bridge (GB). The following three performance measures are calculated:
  1. \(L_{2}\) loss of the parametric estimate, defined as \(\|\widehat {\boldsymbol {\beta }}-\boldsymbol {\beta }_{0}\|\).
  2. Average number of nonzero groups identified by the method (NN).
  3. Average number of nonzero groups identified by the method that are truly nonzero (NNT).

Group selection results are depicted in Table 1. The numbers in the parentheses in the columns labeled ‘NN’ and ‘NNT’ are the corresponding sample standard deviations based on the 100 runs. Boxplots of the \(L_{2}\) losses under different settings are given in Figures 1-3.
Figure 1: Boxplots of \(L_{2}\) loss for \(p_{n}=10\).

Figure 2: Boxplots of \(L_{2}\) loss for \(p_{n}=30\).

Figure 3: Boxplots of \(L_{2}\) loss for \(p_{n}=50\).

Table 1: Group selection results (NN and NNT, with sample standard deviations in parentheses, for σ = 0.5, 1 and 4).
From Table 1, we have the following observations:
  1. Both GB and AGB perform better than GL in all settings. All three methods retain all the truly nonzero groups, but GL always keeps more redundant groups that are unrelated to the response than GB and AGB do.
  2. AGB performs much better for larger σ and \(p_{n}\). When \(p_{n}=50\), the number of groups selected by AGB for \(\sigma=4\) is about 18.5% lower than for \(\sigma=0.5\), whereas the number of groups selected by GB decreases by only 7.37% in the same situation.
  3. For \(p_{n}=10\), GB performs better than AGB, but GB is less stable when \(\sigma=4\).

Figures 1-3 present the \(L_{2}\) losses for varying σ and \(p_{n}\). The performances of GB and AGB are similar. For \(p_{n}=30\) and 50, both GB and AGB perform better than GL. When \(p_{n}=50\), the medians of the \(L_{2}\) losses of the three methods are similar for \(\sigma=0.5\) and 4, but the \(L_{2}\) losses of GL fluctuate more widely.

5.2 Wage data analysis

The workers’ wage data from Berndt [18] contain a random sample of 534 observations on 11 variables from the Current Population Survey of 1985. The data provide information on wages and other characteristics of the workers, including the continuous variables number of years of education, years of work experience and age, and the nominal variables race, sex, region of residence, occupational status, sector, marital status and union membership. Our goal is to identify the important factors for wage, so it is natural to apply our proposed method to these data.

From the residual plot, we can see that the variance of wages is not constant, so the log transformation is used to stabilize it. Due to the multicollinearity between age and experience, one of the two must be dropped; here we remove age from the model. Xie and Huang [15] analyzed these data without transforming Y and did not consider group selection of factors. Following Xie and Huang [15], we fit these data using a partially linear model with U being ‘years of work experience’.

Table 2 reports the estimated regression coefficients of GL, GB and AGB. All three methods exclude marital status. We use the first 400 observations as a training dataset to select and fit the model, and the remaining 134 observations as a testing dataset to evaluate the prediction ability of the selected model. Prediction performance is measured by the median of \(\{|y_{i}-\hat{y}_{i}|, i = 1, 2, \ldots, 134\}\) on the testing data, where the \(y_{i}\) are the 134 test observations and the \(\hat{y}_{i}\) are the corresponding predicted values. The median absolute prediction errors of GL, GB and AGB are 0.3072, 0.3062 and 0.3022, respectively. AGB gives the smallest prediction error, so it is an attractive technique for group selection.
Table 2: Estimates of the wage data. The variables and their codings are: number of years of education; region of residence (1 = southern region, 0 = other); sex (1 = female, 0 = male); union membership (1 = union member, 0 = nonmember); race (1 = other, 0 = White; 1 = Hispanic, 0 = White); occupational status (1 = management, 0 = other; 1 = sales, 0 = other; 1 = clerical, 0 = other; 1 = service, 0 = other; 1 = professional, 0 = other); sector (1 = manufacturing, 0 = other; 1 = construction, 0 = other); marital status (1 = married, 0 = other).

6 Discussion

This paper studies group selection for the high-dimensional partially linear model via the adaptive group bridge method. We also consider the choice of γ in the bridge penalty; notably, we use ‘leave-one-observation-out’ cross-validation to select both λ and γ, which significantly reduces the computational burden. To the best of our knowledge, this is the first attempt to use this approach for group selection in the partially linear model.



This research was supported by the National Natural Science Foundation of China (Grant No. 11401340).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Statistics, Qufu Normal University, Jingxuan West Road, Qufu, 273165, P.R. China


  1. Tibshirani, R: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267-288 (1996)
  2. Frank, I, Friedman, J: A statistical view of some chemometrics regression tools. Technometrics 35, 109-148 (1993)
  3. Fan, J, Li, R: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348-1360 (2001)
  4. Zou, H: The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 101, 1418-1429 (2006)
  5. Zou, H, Hastie, T: Regularization and variable selection via the elastic net. J. R. Stat. Soc. B 67, 301-320 (2005)
  6. Fu, W: Penalized regressions: the bridge versus the lasso. J. Comput. Graph. Stat. 7, 397-416 (1998)
  7. Knight, K, Fu, W: Asymptotics for lasso-type estimators. Ann. Stat. 28, 1356-1378 (2000)
  8. Liu, Y, Zhang, H, Park, C, Ahn, J: Support vector machines with adaptive \(l_{q}\) penalty. Comput. Stat. Data Anal. 51, 6380-6394 (2007)
  9. Huang, J, Horowitz, J, Ma, S: Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Stat. 36, 587-613 (2008)
  10. Wang, M, Song, L, Wang, X: Bridge estimation for generalized linear models with a diverging number of parameters. Stat. Probab. Lett. 80, 1584-1596 (2010)
  11. Chen, Z, Zhu, Y, Zhu, C: Adaptive bridge estimation for high-dimensional regression models. J. Inequal. Appl. 2016, 258 (2016)
  12. Huang, J, Ma, S, Xie, H, Zhang, C: A group bridge approach for variable selection. Biometrika 96, 339-355 (2009)
  13. Park, C, Yoon, Y: Bridge regression: adaptivity and group selection. J. Stat. Plan. Inference 141, 3506-3519 (2011)
  14. Engle, R, Granger, C, Rice, J, Weiss, A: Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 81, 310-320 (1986)
  15. Xie, H, Huang, J: SCAD-penalized regression in high-dimensional partially linear models. Ann. Stat. 37, 673-696 (2009)
  16. Donoho, D, Johnstone, I: Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425-455 (1994)
  17. Wang, L, Li, H, Huang, J: Variable selection in nonparametric varying-coefficient models for analysis of repeated measurements. J. Am. Stat. Assoc. 103, 1556-1569 (2008)
  18. Berndt, ER: The Practice of Econometrics: Classical and Contemporary. Addison-Wesley, Reading (1991)


© The Author(s) 2017