
Block-sparse recovery and rank minimization using a weighted \(l_{p}-l_{q}\) model

Abstract

In this paper, we study a nonconvex, nonsmooth, and non-Lipschitz weighted \(l_{p}-l_{q}\) (\(0< p\leq 1\), \(1< q\leq 2\); \(0\leq \alpha \leq 1\)) norm as a nonconvex metric for recovering block-sparse signals and for rank minimization. Using block-RIP and matrix-RIP conditions, we obtain exact recovery results for block-sparse signals and for rank minimization. We also obtain theoretical error bounds for block-sparse recovery and rank minimization when the measurements are degraded by noise.

1 Introduction

Compressed sensing was introduced by Donoho [1] and Candès, Romberg, and Tao [2, 3] in 2006. It has since become a rapidly developing area of research in engineering, applied mathematics, computer science, statistics, machine learning, and signal processing. The aim of compressed sensing is to recover a sparse signal \(y\in \mathbb{R}^{\mathbb{M}}\) from very few nonadaptive linear measurements

$$ z=\mathbf{A}y+\xi , $$
(1)

where \(\mathbf{A}\in \mathbb{R}^{\mathbb{N}\times \mathbb{M}}\) (\(N\ll M\)) is the measurement matrix, \(\xi \in \mathbb{R}^{\mathbb{N}}\) is the additive noise, and \(z\in \mathbb{R}^{\mathbb{N}}\) is the measurement vector. When \(\xi =0\), the measurements are noise-free and the sparse signal y can be recovered exactly under suitable conditions on A.

If the measurement matrix A satisfies an appropriate restricted isometry property (RIP), then stable and robust recovery can be obtained by a convex optimization procedure such as the \(l_{1}\) minimization ([4, 5]) given by

$$ \min_{y\in \mathbb{R}^{\mathbb{M}}} \Vert y \Vert _{1} \quad \text{subject to}\quad \Vert \mathbf{A}y-z \Vert _{2}\leq \eta . $$
(2)

In this context, the \(l_{1}\) minimization problem (2) works as a convex relaxation of the \(l_{0}\) minimization problem, which is NP-hard [6]. Thus, it is natural to consider the following nonconvex \(l_{p}\) (\(0< p<1\)) minimization model [7]:

$$ \min_{y\in \mathbb{R}^{\mathbb{M}}} \Vert y \Vert _{p} \quad \text{subject to}\quad \Vert \mathbf{A}y-z \Vert _{2}\leq \eta . $$
(3)

Model (3) is used to enhance sparsity.

For \(p=1\), the \(l_{p}\) quasinorm in (3) reduces to the convex \(l_{1}\) norm and (3) coincides with (2). In practice, nonconvex \(l_{p}\) minimization is more challenging than convex \(l_{1}\) minimization; however, \(l_{p}\) minimization allows reconstruction of a sparse signal from fewer measurements than \(l_{1}\) minimization.
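
For intuition, the following small NumPy sketch (illustrative only; the vectors and the helper name are ours, not part of the paper) compares the \(l_{p}\) quasinorm of a sparse vector and of a dense vector with the same Euclidean norm for several values of p.

```python
import numpy as np

def lp_quasinorm(y, p):
    """( sum_k |y_k|^p )^(1/p): a quasinorm for 0 < p < 1, the l1 norm for p = 1."""
    return np.sum(np.abs(y) ** p) ** (1.0 / p)

# Two vectors with the same Euclidean norm: one sparse, one fully dense.
sparse = np.zeros(100)
sparse[:4] = 5.0                       # 4 nonzero entries, ||sparse||_2 = 10
dense = np.full(100, 1.0)              # 100 nonzero entries, ||dense||_2 = 10

for p in (1.0, 0.7, 0.4):
    print(p, lp_quasinorm(sparse, p), lp_quasinorm(dense, p))
# As p decreases, the value for the sparse vector grows much more slowly than the
# value for the dense one, which is why small p promotes sparse solutions.
```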

The paper is organized as follows: In Sect. 2, we give some notations, which are used throughout the paper. In Sect. 3, we discuss block-sparse recovery along with the block-RIP condition. In Sect. 4, we discuss rank minimization along with the matrix-RIP condition. In Sect. 5, we present the contribution of the present work, while in Sect. 6, we establish our main results. In Sect. 7, we give the conclusions of the paper.

2 Notations

Some useful notations, used throughout the paper, are as follows (a short code sketch of the main quantities follows the list):

y: \((y_{1},y_{2},\ldots ,y_{M})^{T}\)

z: \((z_{1},z_{2},\ldots ,z_{N})^{T}\)

\(\langle y,z\rangle \): \(\sum_{k=1}^{M}y_{k}z_{k}\), the inner product of y and z

\(\|y\|_{2}\): \(( \sum_{k=1}^{M}y_{k}^{2} )^{\frac{1}{2}}\), the Euclidean norm

\(\|y\|_{p}\): \(( \sum_{k=1}^{M}\vert y_{k}\vert ^{p} )^{\frac{1}{p}}\) (\(0< p\leq 1\)), the \(l_{p}\) quasinorm

\(\|y\|_{q}\): \(( \sum_{k=1}^{M}\vert y_{k}\vert ^{q} )^{\frac{1}{q}}\) (\(1< q\leq 2\)), the \(l_{q}\) norm

\(y[j]\): the jth block of y

\(\Omega \subseteq \{1,2,\ldots ,n\}\): a set of block indices

\(\Omega ^{c}\): the complement of Ω

\(y[\Omega ]\in \mathbb{R}^{\mathbb{M}}\): the vector that equals y on the blocks indexed by Ω and 0 otherwise

\(\operatorname{supp}(y)\): \(\{j:\Vert y[j]\Vert _{2}\neq 0\}\), the block support of y

\(\|y\|_{2,p}\): \(( \sum_{j=1}^{n}\|y[j]\|_{2}^{p} )^{\frac{1}{p}}\) (\(0< p\leq 1\)), the mixed \(l_{2}/l_{p}\) norm

\(\|y\|_{2,q}\): \(( \sum_{j=1}^{n}\|y[j]\|_{2}^{q} )^{\frac{1}{q}}\) (\(1< q\leq 2\)), the mixed \(l_{2}/l_{q}\) norm

\(\|Y\|_{F}\): \(\sqrt{\langle Y,Y\rangle}=\sqrt{ \sum_{j}\sigma _{j}^{2}(Y)}\), the Frobenius norm of Y

\(\|Y\|_{p}\): \(( \sum_{j}\sigma _{j}^{p}(Y) )^{\frac{1}{p}}\) (\(0< p\leq 1\)), the Schatten-p quasinorm of the matrix Y

\(\|Y\|_{q}\): \(( \sum_{j}\sigma _{j}^{q}(Y) )^{\frac{1}{q}}\) (\(1< q\leq 2\)), the Schatten-q norm of the matrix Y

\(\sigma _{j}(Y)\): the singular values of the matrix Y

\(\bigtriangledown g(y)\): the gradient of g at y

SVD: singular value decomposition

\(\operatorname{rank}(Y)\): the number of nonzero singular values of Y

Diagonal matrix of Y: \(\operatorname{diag}(\sigma _{1}(Y),\sigma _{2}(Y),\ldots ,\sigma _{m_{1}}(Y))\in \mathbb{R}^{m_{1}\times m_{1}}\), where \(\sigma _{1}(Y)\geq \sigma _{2}(Y)\geq \cdots \geq \sigma _{m_{1}}(Y)\geq 0\)

\(\sigma (Y)\): \((\sigma _{1}(Y),\sigma _{2}(Y),\ldots ,\sigma _{m_{1}}(Y))\), the vector of singular values of Y
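
The block-related quantities above translate directly into code. The following is a minimal NumPy sketch (the block sizes and helper names are ours and purely illustrative) that splits a vector into blocks and evaluates the mixed \(l_{2}/l_{r}\) norm.

```python
import numpy as np

def split_blocks(y, block_sizes):
    """Return the blocks y[1], ..., y[n] determined by the sizes d_1, ..., d_n."""
    assert sum(block_sizes) == len(y)
    return np.split(y, np.cumsum(block_sizes)[:-1])

def mixed_norm(y, block_sizes, r):
    """Mixed l2/lr norm ( sum_j ||y[j]||_2^r )^(1/r); a quasinorm for 0 < r < 1."""
    block_l2 = np.array([np.linalg.norm(b) for b in split_blocks(y, block_sizes)])
    return np.sum(block_l2 ** r) ** (1.0 / r)

# Example: M = 8 split into n = 3 blocks of sizes (3, 2, 3).
y = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 4.0, 0.0, 1.0])
print(mixed_norm(y, (3, 2, 3), 1.0))   # mixed l2/l1 norm
print(mixed_norm(y, (3, 2, 3), 0.5))   # mixed l2/l0.5 quasinorm
```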

3 Block-sparse recovery

In many practical applications, real-world signals have particular structures in which the nonzero coefficients occur in blocks. Such signals are known as block-sparse signals. Block-sparse signals arise in many real-world applications such as equalization of sparse communication channels, reconstruction of multiband signals, DNA microarrays, source localization, color imaging, motion segmentation, machine learning, and multiresponse linear regression [8–16].

In the recent past, the block-sparse signal recovery problem has been an active area of research [17–22].

Consider a signal \(y\in \mathbb{R}^{\mathbb{M}}\) that can be viewed as a concatenation of blocks over the index set \(\mathcal{I}=\{d_{1}, d_{2},\ldots ,d_{n}\}\); then y can be expressed as

$$ y= ({\underbrace{y_{1},\ldots ,y_{d_{1}}}_{y[1]}, \underbrace{y_{d_{1}+1},\ldots ,y_{d_{1}+d_{2}}}_{y[2]},\ldots , \underbrace{y_{M-d_{n}+1},\ldots ,y_{M}}_{y[n]}} ) ^{T}, $$
(4)

where \(y[j]\) is the jth block of y, having length \(d_{j}\), and \(M= \sum_{j=1}^{n} d_{j}\).

Definition 1

A signal \(y\in \mathbb{R}^{\mathbb{M}}\) is said to be block s-sparse over the index set \(\mathcal{I}\) if the number of nonzero blocks \(y[j]\), \(j\in \{1,2,\ldots ,n\}\), is at most s.

When \(d_{j}=1\) for all j, block-sparse signals reduce to ordinary sparse signals.

We define

$$ \Vert y \Vert _{2,0}= \sum_{j=1}^{n} \mathcal{I}\bigl( \bigl\Vert y[j] \bigr\Vert _{2}>0\bigr), $$

where \(\mathcal{I}(y)\) denotes an indicator function and is defined by

$$ \mathcal{I}(y)= \textstyle\begin{cases} 1&\text{if } y>0, \\ 0&\text{if } y \leq 0. \end{cases} $$

Thus, y is a block s-sparse signal exactly when \(\Vert y\Vert _{2,0}\leq s\).
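
As a quick illustration (a sketch with assumed block sizes, not part of the paper), \(\Vert y\Vert _{2,0}\) can be computed by counting the blocks with nonzero \(l_{2}\) norm; y is block s-sparse exactly when this count is at most s.

```python
import numpy as np

def block_l20(y, block_sizes, tol=0.0):
    """||y||_{2,0}: the number of blocks of y with nonzero l2 norm."""
    blocks = np.split(y, np.cumsum(block_sizes)[:-1])
    return sum(np.linalg.norm(b) > tol for b in blocks)

y = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 4.0, 0.0, 1.0])
print(block_l20(y, (3, 2, 3)))   # 2: two of the three blocks are nonzero, so y is block 2-sparse
```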

In block-sparse recovery, one has

$$ z=\mathbf{A}y+\xi , $$
(5)

where \(\mathbf{A}\in \mathbb{R}^{\mathbb{N}\times \mathbb{M}}\) (\(N\ll M\)) is the measurement matrix, \(\xi \in \mathbb{R}^{\mathbb{N}}\) is the additive noise, and \(z\in \mathbb{R}^{\mathbb{N}}\) is the measurement vector; the goal is to recover the unknown y from z and the matrix A.

The \(l_{2}/l_{0}\) minimization model is given by:

$$ \min_{y\in \mathbb{R}^{M}} \Vert y \Vert _{2,0} \quad \text{subject to} \quad \Vert \mathbf{A}y-z \Vert _{2}\leq \eta . $$
(6)

Model (6) is the direct way to recover a block-sparse signal, analogous to solving the standard compressed sensing problem (1). Since the minimization model (6) is also NP-hard, an efficient alternative for block-sparse recovery is to use the following \(l_{2}/l_{p}\) (\(0< p\leq 1\)) minimization model in place of the \(l_{2}/l_{0}\) model:

$$ \min_{y\in \mathbb{R}^{M}} \Vert y \Vert _{2,p} \quad \text{subject to}\quad \Vert \mathbf{A}y-z \Vert _{2}\leq \eta , $$
(7)

where the mixed \(l_{2}/l_{p}\) norm is defined as \(\|y\|_{2,p}= ( \sum_{j=1}^{n} \|y[j]\|_{2}^{p} )^{\frac{1}{p}}\).

We note that the \(l_{2}/l_{p}\) minimization model is a generalization of the standard \(l_{p}\) minimization model (3).

Model (7) is a convex minimization model for \(p=1\) and a nonconvex minimization model for \(0< p< 1\).

For \(p=1\), (7) reduces to the convex \(l_{2}/l_{1}\) minimization model.

Extending the usual RIP condition [23] to the block setting, Eldar and Mishali [24] introduced the concept of block-RIP, defined as follows:

Definition 2

(Block-RIP ([24]))

Let \(\mathbf{A}:\mathbb{R}^{\mathbb{M}}\rightarrow \mathbb{R}^{\mathbb{N}}\) be an \(N\times M\) measurement matrix. Then, A is said to satisfy the block-RIP over the index set \(\mathcal{I}\) with constant \(\delta _{s/\mathcal{I}}\) if every vector \(y\in \mathbb{R}^{\mathbb{M}}\) that is block s-sparse over \(\mathcal{I}\) satisfies

$$ (1-\delta _{s/\mathcal{I}}) \Vert y \Vert _{2}^{2} \leq \Vert \mathbf{A}y \Vert _{2}^{2} \leq (1+\delta _{s/\mathcal{I}}) \Vert y \Vert _{2}^{2}. $$
(8)

It was also shown in [24] that the block-RIP constant is generally smaller than the usual RIP constant. Using the block-RIP, Eldar and Mishali [24] obtained a theoretical recovery bound for block s-sparse signals via the \(l_{2}/l_{1}\) minimization model.
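
For small problems, the block-RIP constant \(\delta _{s/\mathcal{I}}\) can be estimated by enumerating all block supports of size s and examining the extreme eigenvalues of the corresponding Gram submatrices. The sketch below is illustrative only: exhaustive enumeration scales exponentially, and the matrix and block sizes are assumed values, not taken from [24].

```python
import numpy as np
from itertools import combinations

def block_rip_constant(A, block_sizes, s):
    """Smallest delta with (1-delta)||y||_2^2 <= ||Ay||_2^2 <= (1+delta)||y||_2^2
    for every block s-sparse y, via exhaustive search over block supports."""
    cols = np.split(np.arange(A.shape[1]), np.cumsum(block_sizes)[:-1])
    delta = 0.0
    for supp in combinations(range(len(block_sizes)), s):
        S = np.concatenate([cols[j] for j in supp])
        eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])   # extreme eigenvalues of the Gram matrix
        delta = max(delta, eig.max() - 1.0, 1.0 - eig.min())
    return delta

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 16)) / np.sqrt(12)             # normalized Gaussian measurement matrix
print(block_rip_constant(A, (4, 4, 4, 4), s=2))
```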

4 Rank minimization

Rank minimization arises in many real-world applications such as machine learning, collaborative filtering, remote sensing, computer vision, pattern recognition, data mining, low-dimensional embedding, and wavelet analysis [25–36]. Rank minimization, or low-rank matrix recovery, aims to recover a low-rank matrix from its linear measurements

$$ b=\mathcal{A}(Y)+\xi , $$
(9)

where \(\mathcal{A}: \mathbb{R}^{m_{1}\times m_{2}}\rightarrow \mathbb{R}^{m}\) (\(m\ll m_{1}m_{2}\)) denotes a measurement map (linear), \(\xi \in \mathbb{R}^{m}\) denotes a vector of measurement errors, \(b\in \mathbb{R}^{m}\) denotes the observation vector, and \(Y\in \mathbb{R}^{m_{1}\times m_{2}}\) denotes the unknown matrix, which is aimed to be recovered using \(\mathcal{A}\) and b. As a special case, it aims to recover a low-rank matrix from a subset of its entries (see [37]).

The rank-minimization problem is closely related to the compressed sensing problem: when the unknown matrix Y in (9) is diagonal, problem (9) reduces to problem (1).

The rank-minimization model is given by

$$ \min_{Y\in \mathbb{R}^{m_{1}\times m_{2}}} \operatorname{rank}(Y) \quad \text{subject to}\quad \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert _{2}\leq \eta , $$
(10)

where \(\eta \geq 0\) is the noise level. The rank-minimization model (10), given by Fazel [38], is the direct way to solve (9), analogous to solving the standard compressed sensing problem (1). Since, like the \(l_{0}\) minimization model, (10) is NP-hard [39], one can use the following Schatten-p minimization model [40] in place of (10):

$$ \min_{Y\in \mathbb{R}^{m_{1}\times m_{2}}} \Vert Y \Vert _{p} \quad \text{subject to}\quad \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert _{2}\leq \eta , $$
(11)

where \(\Vert Y\Vert _{p}= ( \sum_{j}\sigma _{j}^{p}(Y) )^{\frac{1}{p}}\) and \(\sigma _{j}(Y)\) are the singular values of the matrix Y. The minimization model (11), which is an extension of (3), becomes nuclear norm minimization when \(p=1\); the nuclear norm is denoted by \(\Vert Y\Vert _{*}\).
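
For concreteness, the Schatten-p quasinorm and the nuclear norm can be evaluated from the singular values, as in the following sketch (illustrative only; the example matrix is ours).

```python
import numpy as np

def schatten(Y, p):
    """( sum_j sigma_j(Y)^p )^(1/p): a quasinorm for 0 < p < 1, the nuclear norm for p = 1."""
    sigma = np.linalg.svd(Y, compute_uv=False)
    return np.sum(sigma ** p) ** (1.0 / p)

Y = np.outer([1.0, 2.0, 0.0], [0.0, 1.0, 1.0, 2.0])   # a rank-1 example matrix
print(schatten(Y, 1.0))   # nuclear norm ||Y||_*
print(schatten(Y, 0.5))   # Schatten-0.5 quasinorm
```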

In the context of the rank-minimization problem, we recall the RIP condition for the matrix case that appeared in [41]. In fact, the matrix-RIP is defined as follows:

Definition 3

(Matrix-RIP ([41]))

Let \(\mathcal{A}:\mathbb{R}^{m_{1}\times m_{2}}\rightarrow \mathbb{R}^{m}\) be a linear measurement map and \(1\leq k\leq \min ({m_{1},m_{2}})\) be an integer. The matrix restricted isometry constant (RIC) of order k for \(\mathcal{A}\) is defined as the smallest number \(\delta _{k}\) such that, for all matrices \(Y\in \mathbb{R}^{m_{1}\times m_{2}}\) with rank at most k,

$$ (1-\delta _{k}) \Vert Y \Vert _{F}^{2} \leq \bigl\Vert \mathcal{A}(Y) \bigr\Vert _{2}^{2} \leq (1+\delta _{k}) \Vert Y \Vert _{F}^{2}. $$
(12)

5 Contribution

Esser et al. [42] and Yin et al. [43] carried out systematic and interesting studies of a nonsmooth, nonconvex, and Lipschitz continuous \(l_{1}-l_{2}\) minimization model. Inspired by Esser et al. [42] and Yin et al. [43], Wang and Zhang [44] proposed the \(l_{1}-l_{p}\) (\(1< p\leq 2\)) minimization model for recovering general sparse signals using projected neural-network algorithms. Zhao et al. [45] presented a more general nonsmooth, nonconvex, and non-Lipschitz sparse-signal recovery model, \(l_{p}-l_{q}\) (\(0< p\leq 1\), \(1< q\leq 2\)), which extends the models of Esser et al. [42], Yin et al. [43], and Wang and Zhang [44].

In this work, we propose a weighted \(l_{p}-l_{q}\) (\(0< p\leq 1\), \(1< q\leq 2\); \(0\leq \alpha \leq 1\)) minimization model and use it to study the problems of block-sparse recovery and rank minimization; the weighted \(l_{p}-l_{q}\) metric is nonconvex, nonsmooth, and non-Lipschitz.

We consider the following nonconvex, nonsmooth, and non-Lipschitz weighted \(l_{p}-l_{q}\) minimization for block-sparse signal recovery:

$$ \min_{y\in \mathbb{R}^{\mathbb{M}}} g(y)= \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q} \quad \text{subject to}\quad \Vert \mathbf{A}y-z \Vert _{2}\leq \eta , $$
(13)

where \(0< p\leq 1\), \(1< q\leq 2\) and \(0\leq \alpha \leq 1\).
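
For concreteness, the weighted objective in (13) can be evaluated as in the sketch below (illustrative only; the constraint \(\Vert \mathbf{A}y-z\Vert _{2}\leq \eta \) is not handled here, and the block sizes are assumed).

```python
import numpy as np

def weighted_lp_lq_block(y, block_sizes, p, q, alpha):
    """g(y) = ||y||_{2,p} - alpha * ||y||_{2,q}, 0 < p <= 1, 1 < q <= 2, 0 <= alpha <= 1."""
    b = np.array([np.linalg.norm(v) for v in np.split(y, np.cumsum(block_sizes)[:-1])])
    return np.sum(b ** p) ** (1.0 / p) - alpha * np.sum(b ** q) ** (1.0 / q)

y = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 4.0, 0.0, 1.0])
print(weighted_lp_lq_block(y, (3, 2, 3), p=0.5, q=2.0, alpha=0.5))
```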

We also consider the following nonsmooth, nonconvex, and non-Lipschitz weighted \(l_{p}-l_{q}\) minimization to solve the rank-minimization problem:

$$ \min_{Y\in \mathbb{R}^{m_{1}\times m_{2}}} G(Y)= \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q}\quad \text{subject to} \quad \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert _{2}\leq \eta , $$
(14)

where \(0< p\leq 1\), \(1< q\leq 2\) and \(0\leq \alpha \leq 1\).
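
Its matrix counterpart in (14) acts on the singular values in the same way; again, only the weighted metric itself is sketched (illustratively), not the constrained minimization.

```python
import numpy as np

def weighted_lp_lq_matrix(Y, p, q, alpha):
    """G(Y) = ||Y||_p - alpha * ||Y||_q, computed from the singular values of Y."""
    sigma = np.linalg.svd(Y, compute_uv=False)
    return np.sum(sigma ** p) ** (1.0 / p) - alpha * np.sum(sigma ** q) ** (1.0 / q)

Y = np.diag([3.0, 1.0, 0.0])   # an assumed example matrix
print(weighted_lp_lq_matrix(Y, p=0.5, q=2.0, alpha=0.5))
```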

6 Main results

In this section, we shall obtain recovery results for block-sparse signals via the minimization model (13) using the block-RIP condition. We shall also obtain recovery results for the rank-minimization problem via the minimization model (14) using the matrix-RIP condition.

6.1 Block-sparse recovery results

Theorem 6.1

(Noisy Recovery)

Let \(\mathbf{A}\in \mathbb{R}^{\mathbb{N}\times \mathbb{M}}\) be a measurement matrix, let \(y\in \mathbb{R}^{\mathbb{M}}\) be a nearly block s-sparse signal, and let \(z=\mathbf{A}y+\xi \) with \(\|\xi \|_{2}\leq \eta \). Assume that \(b>0\) is chosen so that bs is an integer. If

$$ a= \frac{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}}>1 $$
(15)

and A satisfies the condition

$$ \delta _{bs}+a\delta _{(b+1)s}< a-1, $$
(16)

then any solution ŷ of the block weighted \(l_{p}-l_{q}\) minimization model (13) obeys

$$ \Vert y-\hat{y} \Vert _{2}\leq \mathbf{C}_{1}\eta + \mathbf{C}_{2} \frac{1}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}} \Vert y-y_{s} \Vert _{2,p}, $$

where \(\mathbf{C}_{1}= \frac {2(a+1)}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\), \(\mathbf{C}_{2}= \frac {2({2-\delta _{(b+1)s}}+\delta _{bs})}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\), and \(y_{s}\) is the best block s-sparse approximation of y.
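
Since the constants in Theorem 6.1 are explicit, conditions (15) and (16) and the resulting \(\mathbf{C}_{1}\), \(\mathbf{C}_{2}\) can be evaluated numerically once s, b, p, q, α and the block-RIP constants are specified. The sketch below uses assumed, purely illustrative values of \(\delta _{bs}\) and \(\delta _{(b+1)s}\).

```python
def theorem61_constants(s, b, p, q, alpha, delta_bs, delta_b1s):
    """Check conditions (15)-(16) and return (a, C1, C2); None if the hypotheses fail."""
    a = ((b * s) ** (1 / p - 0.5) - alpha * (b * s) ** (1 / q - 0.5)) \
        / (s ** (1 / p - 0.5) + alpha * s ** (1 / q - 0.5))
    if a <= 1 or delta_bs + a * delta_b1s >= a - 1:
        return None
    denom = a - a * delta_b1s - 1 - delta_bs
    return a, 2 * (a + 1) / denom, 2 * (2 - delta_b1s + delta_bs) / denom

# Assumed, purely illustrative values of the block-RIP constants.
print(theorem61_constants(s=5, b=4, p=0.5, q=2.0, alpha=0.5,
                          delta_bs=0.1, delta_b1s=0.1))
```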

First, we prove the following lemma, which is crucial for proving Theorem 6.1.

Lemma 6.2

Let \(y\in \mathbb{R}^{\mathbb{M}}\), \(y=( \underbrace{y_{1},\ldots ,y_{d_{1}}}_{y[1]}, \underbrace{y_{d_{1}+1},\ldots ,y_{d_{1}+d_{2}}}_{y[2]},\ldots , \underbrace{y_{M-d_{n}+1},\ldots ,y_{M}}_{y[n]} ) ^{T}\), be arbitrary, and let \(0 \leq \alpha \leq 1\), \(0 < p \leq 1 \), and \(1< q\leq 2\). Then

$$ \bigl(n^{\frac{1}{p}}-\alpha{n^{\frac{1}{q}}}\bigr) \Bigl( \min _{j \in{[n]}} \bigl\Vert y{[j]} \bigr\Vert _{2} \Bigr) \leq \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q} \leq \bigl({n^{ \frac{1}{p}-\frac{1}{q}}}-\alpha \bigr) \Vert y \Vert _{2,q}. $$
(17)

In particular, if the block support of solution ŷ is given by \(\mathcal{R}\subseteq [n]\); \(\vert \mathcal{R}\vert =r\), then

$$ \bigl(r^{\frac{1}{p}}-\alpha r^{\frac{1}{q}}\bigr) \Bigl( \min _{j \in{[\mathcal{R}]}} \bigl\Vert {y} {[j]} \bigr\Vert _{2} \Bigr) \leq \Vert {y} \Vert _{2,p}-\alpha \Vert {y} \Vert _{2,q} \leq \bigl(r^{\frac{1}{p}-\frac{1}{q}}-\alpha \bigr) \Vert {y} \Vert _{2,q}. $$
(18)

Proof

The right-hand side of inequality (17) follows from Hölder’s inequality and the norm inequality \(\|y\|_{2,p}\leq {n^{\frac{1}{p}-\frac{1}{q}}}\|y\|_{2,q}\), valid for any block signal \(y\in \mathbb{R}^{\mathbb{M}}\). Indeed, these give

$$ \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q}\leq \bigl({n^{\frac{1}{p}-\frac{1}{q}}}- \alpha \bigr) \Vert y \Vert _{2,q}. $$
(19)

Now, we prove the left-hand side of inequality (17). For \(n=1\), (17) holds trivially. For \(n>1\), regard \(g(y)= \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q}\) as a function of the block norms \(\Vert y[j]\Vert _{2}\geq 0\), \(j=1,2,\ldots ,n\). Then

$$ \bigtriangledown _{y[j]}g(y)= \bigl\Vert y[j] \bigr\Vert _{2}^{(p-1)} \Biggl( \sum_{j=1}^{{n}} \bigl\Vert y[j] \bigr\Vert _{2}^{p} \Biggr)^{\frac{1}{p}-1}- \alpha \bigl\Vert y[j] \bigr\Vert _{2}^{(q-1)} \Biggl( \sum _{j=1}^{{n}} \bigl\Vert y[j] \bigr\Vert _{2}^{q} \Biggr)^{ \frac{1}{q}-1}\geq 0, $$

since \(\|y[j]\|_{2}^{(p-1)} ( \sum_{j=1}^{{n}}\|y[j]\|_{2}^{p} )^{\frac{1}{p}-1}= ( \Vert y\Vert _{2,p}/ \Vert y[j]\Vert _{2} )^{1-p}\geq 1\), \(\|y[j]\|_{2}^{(q-1)} ( \sum_{j=1}^{{n}}\|y[j]\|_{2}^{q} )^{\frac{1}{q}-1}= ( \Vert y\Vert _{2,q}/ \Vert y[j]\Vert _{2} )^{1-q}\leq 1\), and \(0\leq \alpha \leq 1\).

Hence, \(g(y)\), viewed as a function of the block norms, is nondecreasing with respect to each \(\Vert y[j]\Vert _{2}\). Consequently, \(g(y)\geq g (\min_{j\in [n]} \Vert y[j] \Vert _{2},\ldots ,\min_{j \in [ n]} \Vert y[j] \Vert _{2} )\).

Thus,

$$ \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q}\geq \bigl({n^{\frac{1}{p}}}-\alpha {n^{ \frac{1}{q}}}\bigr) \Bigl( \min _{j\in{[n]}} \bigl\Vert y[j] \bigr\Vert _{2} \Bigr). $$

Inequality (18) follows on applying (17) to ŷ. □
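
Lemma 6.2 can also be sanity-checked numerically on random block vectors; the following sketch (illustrative only, with assumed parameters and block sizes) verifies inequality (17) on random samples.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, alpha, sizes = 0.5, 2.0, 0.5, (3, 2, 3)   # assumed parameters and block sizes
n = len(sizes)
for _ in range(1000):
    y = rng.normal(size=sum(sizes))
    b = np.array([np.linalg.norm(v) for v in np.split(y, np.cumsum(sizes)[:-1])])
    g = np.sum(b ** p) ** (1 / p) - alpha * np.sum(b ** q) ** (1 / q)
    lower = (n ** (1 / p) - alpha * n ** (1 / q)) * b.min()
    upper = (n ** (1 / p - 1 / q) - alpha) * np.sum(b ** q) ** (1 / q)
    assert lower - 1e-9 <= g <= upper + 1e-9
print("inequality (17) held on all random samples")
```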

Proof of Theorem 6.1

Let ŷ be a solution of the minimization model (13) and set \(\mathbf{v}=\hat{y}-y\). Let \(\mathcal{T}_{0}\) be the index set of the s blocks of y with the largest \(l_{2}\) norms. Decompose v into a series of vectors \(\mathbf{v}_{\mathcal{T}_{0}},\mathbf{v}_{\mathcal{T}_{1}},\ldots , \mathbf{v}_{\mathcal{T}_{J}}\) such that \(\mathbf{v}= \sum_{j=0}^{J}\mathbf{v}_{\mathcal{T}_{j}}\), where \(\mathbf{v}_{\mathcal{T}_{j}}\) is the restriction of v to the set \(\mathcal{T}_{j}\), \(\mathcal{T}_{0}\) contains s blocks, and each \(\mathcal{T}_{j}\) (\(j\geq 1\)) contains bs blocks (except possibly \(\mathcal{T}_{J}\)). The block indices are arranged so that \(\|\mathbf{v}_{\mathcal{T}_{j}}[1]\|_{2}\geq \|\mathbf{v}_{\mathcal{T}_{j}}[2]\|_{2} \geq \cdots \geq \|\mathbf{v}_{\mathcal{T}_{j}}[bs]\|_{2}\geq \|\mathbf{v}_{\mathcal{T}_{j+1}}[1]\|_{2}\geq \|\mathbf{v}_{\mathcal{T}_{j+1}}[2]\|_{2}\geq \cdots \) for all \(j\geq 1\).

Since ŷ is a minimizer of minimization model (13), we have

$$\begin{aligned} \Vert y_{\mathcal{T}_{0}} \Vert _{2,p}+ \Vert y_{\mathcal{T}_{0}^{c}} \Vert _{2,p}- \alpha \Vert y_{\mathcal{T}_{0}} \Vert _{2,q}={}& \Vert y \Vert _{2,p}-\alpha \Vert y \Vert _{2,q} \\ \geq {}& \Vert \hat{y} \Vert _{2,p}-\alpha \Vert \hat{y} \Vert _{2,q} \\ \geq {}& \Vert y+\mathbf{v} \Vert _{2,p}-\alpha \Vert y+ \mathbf{v} \Vert _{2,q} \\ \geq {}& \Vert y_{\mathcal{T}_{0}}+\mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,p}+ \Vert y_{ \mathcal{T}_{0}^{c}}+\mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p} \\ &{}-\alpha \bigl( \Vert y+\mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,q}+ \Vert \mathbf{v}_{ \mathcal{T}_{0}^{c}} \Vert _{2,q}\bigr) \\ \geq {}& \Vert y_{\mathcal{T}_{0}} \Vert _{2,p}- \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,p}- \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p}+ \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p} \\ {}&-\alpha \Vert y+\mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,q}- \alpha \Vert \mathbf{v}_{ \mathcal{T}_{0}^{c}} \Vert _{2,q} \\ \geq {}& \Vert y_{\mathcal{T}_{0}} \Vert _{2,p}- \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,p}- \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p}+ \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p} \\ &{}-\alpha \Vert y \Vert _{2,q}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,q}- \alpha \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,q}. \end{aligned}$$
(20)

Therefore,

$$\begin{aligned} \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{ \mathcal{T}_{0}^{c}} \Vert _{2,q}\leq \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,p}+ \alpha \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,q}+2 \Vert y_{\mathcal{T}_{0}^{c}} \Vert _{2,p}. \end{aligned}$$
(21)

Next, we derive an upper bound for \(\sum_{j\geq 2}\|\mathbf{v}_{\mathcal{T}_{j}}\|_{2}\). Using Lemma 6.2, for each \(t\in \mathcal{T}_{j}\), \(r\in \mathcal{T}_{j-1}\), and \(j\geq 2\), we have

$$ \bigl\Vert \mathbf{v}_{\mathcal{T}_{j}}[t] \bigr\Vert _{2}\leq \min_{r\in \mathcal{T}_{j-1}} \bigl\Vert \mathbf{v}_{\mathcal{T}_{j-1}}[r] \bigr\Vert _{2}\leq \frac{ \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,q}}{(bs)^{\frac{1}{p}}-\alpha (bs)^{\frac{1}{q}}} .$$
(22)

Then, we have

$$\begin{aligned} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}&\leq (bs)^{\frac{1}{2}} \frac{ \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,q}}{(bs)^{\frac{1}{p}}-\alpha (bs)^{\frac{1}{q}}} \\ &= \frac{ \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,q}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$

Hence, it follows that

$$\begin{aligned} \sum_{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}&\leq \frac{ \sum_{j\geq 2} ( \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{j-1}} \Vert _{2,q} )}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \\ &= \frac{ \sum_{j\geq 1} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2,p}-\alpha \sum_{j\geq 1} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2,q}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(23)

We note that

$$\begin{aligned} \sum_{j\geq 1} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2,p} \leq \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p}\quad \text{and} \quad \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,q}\leq \sum _{j\geq 1} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2,q}. \end{aligned}$$
(24)

Using (24) in (23), we have

$$\begin{aligned} \sum_{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}&\leq \frac{ \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,p}-\alpha \Vert \mathbf{v}_{\mathcal{T}_{0}^{c}} \Vert _{2,q}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(25)

Using (21) in (25), we have

$$\begin{aligned} \sum_{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}&\leq \frac{ \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,p}+\alpha \Vert \mathbf{v}_{\mathcal{T}_{0}} \Vert _{2,q}+2 \Vert y_{\mathcal{T}_{0}^{c}} \Vert _{2,p}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \\ &= \frac{ ((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}} ) \Vert \mathbf{v}_{\mathcal{T}_{0}}+\mathbf{v}_{\mathcal{T}_{1}} \Vert _{2}+2 \Vert y_{\mathcal{T}_{0}^{c}} \Vert _{2,p}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(26)

By the feasibility of y and ŷ, we have

$$\begin{aligned} \Vert \mathbf{A}\mathbf{v} \Vert _{2}&= \Vert \mathbf{A} {y-\hat{y}} \Vert _{2} \\ &= \bigl\Vert ( \mathbf{A}y-z)-(\mathbf{A} \hat{y}-z) \bigr\Vert _{2} \\ &\leq \Vert \mathbf{A}y-z \Vert _{2}+ \Vert \mathbf{A} \hat{y}-z \Vert _{2} \\ &\leq 2\eta . \end{aligned}$$
(27)

Using the RIP condition (8), we have

$$\begin{aligned} \Vert \mathbf{A}\mathbf{v} \Vert _{2}&= \biggl\Vert \mathbf{A} ({\mathbf{v}_{ \mathcal{T}_{0}}}+{\mathbf{v}_{\mathcal{T}_{1}}} )+\sum _{j\geq 2} \mathbf{A} {\mathbf{v}_{\mathcal{T}_{j}}} \biggr\Vert _{2} \\ &\geq \bigl\Vert \mathbf{A} ({\mathbf{v}_{\mathcal{T}_{0}}}+{ \mathbf{v}_{ \mathcal{T}_{1}}} ) \bigr\Vert _{2}-\sum _{j\geq 2} \Vert \mathbf{A} {\mathbf{v}_{ \mathcal{T}_{j}}} \Vert _{2} \\ &\geq ({1-\delta _{(b+1)s}}) \Vert \mathbf{v}_{\mathcal{T}_{0}}+{ \mathbf{v}_{\mathcal{T}_{1}}} \Vert _{2}-({1+\delta _{bs}})\sum _{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}. \end{aligned}$$
(28)

Using (26) in (28), we have

$$\begin{aligned} \Vert \mathbf{A}\mathbf{v} \Vert _{2}\geq {}&({1-\delta _{(b+1)s}}) \Vert \mathbf{v}_{ \mathcal{T}_{0}}+{\mathbf{v}_{\mathcal{T}_{1}}} \Vert _{2}-({1+\delta _{bs}}) \\ &{}\times \frac{ ((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}} ) \Vert \mathbf{v}_{\mathcal{T}_{0}}+\mathbf{v}_{\mathcal{T}_{1}} \Vert _{2}+2 \Vert y_{\mathcal{T}_{0}^{c}} \Vert _{2,p}}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \\ ={}& \biggl({1-\delta _{(b+1)s}}-\frac{({1+\delta _{bs}})}{a} \biggr) \Vert \mathbf{v}_{\mathcal{T}_{0}}+{\mathbf{v}_{\mathcal{T}_{1}}} \Vert _{2}- \frac{2({1+\delta _{bs}})}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p}. \end{aligned}$$
(29)

Therefore, if \(\delta _{bs}+a\delta _{(b+1)s}< a-1 \), then it yields that

$$\begin{aligned} \Vert \mathbf{v}_{\mathcal{T}_{0}}+{\mathbf{v}_{\mathcal{T}_{1}}} \Vert _{2} \leq {}& \frac{1}{{1-\delta _{(b+1)s}}-\frac{({1+\delta _{bs}})}{a}} \Vert \mathbf{A}\mathbf{v} \Vert _{2}+ \frac{2(1+\delta _{bs})}{{1-\delta _{(b+1)s}}-\frac{({1+\delta _{bs}})}{a}} \\ &{}\times \frac{1}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ \leq {}& \frac{2a\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \\ &{}+ \frac{2a(1+\delta _{bs}) ((bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}- \frac{1}{2}} )^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p}. \end{aligned}$$
(30)

Using (30) in (26), we have

$$\begin{aligned} \sum_{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}\leq {}&\frac{1}{a} \biggl[\frac{2a\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \\ &{} + \frac{2a(1+\delta _{bs}) ((bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}} )^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \biggr] \\ &{}+ \frac{2}{(bs)^{\frac{1}{p}-\frac{1}{2}}-\alpha (bs)^{\frac{1}{q}-\frac{1}{2}}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ \leq{} &\frac{2\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \\ &{}+ \frac{2(1-\delta _{(b+1)s})((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p}. \end{aligned}$$
(31)

We know that

$$ \Vert \mathbf{v} \Vert _{2}\leq \Vert \mathbf{v}_{\mathcal{T}_{0}}+\mathbf{v}_{ \mathcal{T}_{1}} \Vert _{2}+\sum _{j\geq 2} \Vert \mathbf{v}_{\mathcal{T}_{j}} \Vert _{2}. $$
(32)

Combining (26), (30), and (32), we have

$$\begin{aligned} \Vert \mathbf{v} \Vert _{2}\leq {}& \frac{2a\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}+ \frac{2(1+\delta _{bs}) ((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}} )^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ &{}+\frac{2\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}+ \frac{2(1-\delta _{(b+1)s})((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ \leq {}&\frac{2(a+1)\eta}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}+ \frac{2({2-\delta _{(b+1)s}}+\delta _{bs})((s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ ={}&\mathbf{C}_{1}\eta +\mathbf{C}_{2} \frac{1}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}} \Vert y_{ \mathcal{T}_{0}^{c}} \Vert _{2,p} \\ ={}&\mathbf{C}_{1}\eta +\mathbf{C}_{2} \frac{1}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}} \Vert y-y_{s} \Vert _{2,p}, \end{aligned}$$
(33)

where \(\mathbf{C}_{1}= \frac {2(a+1)}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\) and \(\mathbf{C}_{2}= \frac {2({2-\delta _{(b+1)s}}+\delta _{bs})}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\). □

In the following corollary, we obtain the exact block-sparse recovery result for the weighted \(l_{p}-l_{q}\) minimization model (13) under the block-RIP condition when the noise level \(\eta = 0\).

Corollary 6.3

(Noiseless Recovery)

If all the conditions of Theorem 6.1 hold and the noise level \(\eta =0\), then any solution ŷ of the block-weighted \(l_{p}-l_{q}\) minimization model (13) obeys

$$ \Vert y-\hat{y} \Vert _{2}\leq \mathbf{C}_{2} \frac{1}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}} \Vert y-y_{s} \Vert _{2,p}, $$

where \(\mathbf{C}_{2}= \frac {2({2-\delta _{(b+1)s}}+\delta _{bs})}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\). In particular, if y is the block s-sparse then the recovery result is exact.

Proof

Setting the noise level \(\eta =0\) in (33), we have

$$ \Vert \mathbf{v} \Vert _{2}\leq \mathbf{C}_{2} \frac{1}{(s)^{\frac{1}{p}-\frac{1}{2}}+\alpha (s)^{\frac{1}{q}-\frac{1}{2}}} \Vert y-y_{s} \Vert _{2,p}, $$

where \(\mathbf{C}_{2}= \frac {2({2-\delta _{(b+1)s}}+\delta _{bs})}{{a-a\delta _{(b+1)s}}-1-\delta _{bs}}\). If y is block s-sparse, then \(\Vert y-y_{s}\Vert _{2,p}=0\) and the recovery is exact, i.e., \(\mathbf{v}=0\) and \(\hat{y}=y\). □

6.2 Rank-minimization results

Theorem 6.4

(Noisy Recovery)

Let \(\mathcal{A}:\mathbb{R}^{m_{1}\times m_{2}}\rightarrow \mathbb{R}^{m}\) be a linear measurement map and \(1\leq k\leq \min \{m_{1},m_{2}\}\) be an integer. Let \(Y\in \mathbb{R}^{m_{1}\times m_{2}}\) be a nearly rank-k matrix and \(b=\mathcal{A}(Y)+\xi \) with \(\|\xi \|_{2}\leq \eta \). Assume that \(\mu >0\) is chosen so that μk is an integer. If

$$ \mathcal{C}= \frac{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}}>1 $$
(34)

and \(\mathcal{A}\) satisfies the RIP condition (12) with

$$ \delta _{\mu k}+\mathcal{C}\delta _{(\mu +2)k}< \mathcal{C}-1, $$
(35)

then any solution Ŷ of the matrix-weighted \(l_{p}-l_{q}\) minimization model (14) obeys

$$ \Vert Y-\hat{Y} \Vert _{2}\leq \mathbf{C}_{3}\eta + \mathbf{C}_{4} \frac{1}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y-Y_{k} \Vert _{p}, $$

where \(\mathbf{C}_{3}= \frac {2(\mathcal{C}+1)}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\), \(\mathbf{C}_{4}= \frac {2({2-\delta _{(\mu +2)k}}+\delta _{\mu k})}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\), and \(Y_{k}\) is the best rank-k approximation of Y.

First, we prove the following lemma, which is crucial for proving Theorem 6.4.

Lemma 6.5

Let \(Y\in \mathbb{R}^{m_{1}\times m_{2}}\) (\(m_{1}\leq m_{2}\)) be arbitrary, and let \(0 \leq \alpha \leq 1\), \(0 < p \leq 1 \), and \(1< q\leq 2\). Then we have

$$ \bigl(m_{1}^{\frac{1}{p}}-\alpha{m_{1}^{\frac{1}{q}}} \bigr) \bigl( \sigma _{m_{1}}(Y) \bigr)\leq \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q}\leq \bigl({m_{1}^{ \frac{1}{p}-\frac{1}{q}}}- \alpha \bigr) \Vert Y \Vert _{q}. $$
(36)

In particular, if the support of \(\sigma (Y)\) is given by \(\Omega \subseteq [m_{1}]\); \(|\Omega |=\lambda \), then

$$ \bigl(\lambda ^{\frac{1}{p}}-\alpha \lambda ^{\frac{1}{q}}\bigr) \sigma _{\lambda}(Y) \leq \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q} \leq \bigl(\lambda ^{\frac{1}{p}-\frac{1}{q}}-\alpha \bigr) \Vert Y \Vert _{q}. $$
(37)

Proof

The right-hand side of inequality (36) follows from Hölder’s inequality and the norm inequality \(\|Y\|_{p}\leq {m_{1}^{\frac{1}{p}-\frac{1}{q}}}\|Y\|_{q}\), valid for any \(Y\in \mathbb{R}^{m_{1}\times m_{2}}\). Indeed, these give

$$ \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q}\leq \bigl({m_{1}^{\frac{1}{p}-\frac{1}{q}}}- \alpha \bigr) \Vert Y \Vert _{q}. $$
(38)

Now, we prove the left-hand side of inequality (36). For \(m_{1}=1\), (36) holds trivially. For \(m_{1}>1\), regard \(G(Y)= \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q}\) as a function of the singular values \(\sigma _{j}\geq 0\), \(j=1,2,\ldots ,m_{1}\). Then

$$ \bigtriangledown _{\sigma _{j}}G(Y)=\sigma _{j}^{(p-1)} \Biggl( \sum_{j=1}^{{m_{1}}}\sigma _{j}^{p} \Biggr)^{\frac{1}{p}-1}- \alpha \sigma _{j}^{(q-1)} \Biggl( \sum_{j=1}^{{m_{1}}} \sigma _{j}^{q} \Biggr)^{\frac{1}{q}-1}\geq 0, $$

since \(\sigma _{j}^{(p-1)} ( \sum_{j=1}^{{m_{1}}}\sigma _{j}^{p} )^{\frac{1}{p}-1}= ( \Vert Y\Vert _{p}/\sigma _{j} )^{1-p}\geq 1\), \(\sigma _{j}^{(q-1)} ( \sum_{j=1}^{{m_{1}}}\sigma _{j}^{q} )^{\frac{1}{q}-1}= ( \Vert Y\Vert _{q}/\sigma _{j} )^{1-q}\leq 1\), and \(0\leq \alpha \leq 1\).

Hence, \(G(Y)\), viewed as a function of the singular values, is nondecreasing with respect to each \(\sigma _{j}\). Consequently, \(G(Y)\geq G (\min_{j\in [m_{1}]}\sigma _{j}, \ldots ,\min_{j\in [ m_{1}]}\sigma _{j} )\).

Thus,

$$ \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q}\geq \bigl({m_{1}^{\frac{1}{p}}}-\alpha {m_{1}^{ \frac{1}{q}}} \bigr) \bigl( \sigma _{m _{1}}(Y) \bigr). $$

Inequality (37) follows on applying (36) to the matrix Y. □
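
An analogous numerical check of inequality (36) on random matrices (illustrative only, with assumed dimensions and parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, alpha = 0.5, 2.0, 0.5                      # assumed parameters
for _ in range(1000):
    Y = rng.normal(size=(4, 6))                  # m1 = 4, m2 = 6
    sigma = np.linalg.svd(Y, compute_uv=False)   # the m1 singular values of Y
    m1 = len(sigma)
    G = np.sum(sigma ** p) ** (1 / p) - alpha * np.sum(sigma ** q) ** (1 / q)
    lower = (m1 ** (1 / p) - alpha * m1 ** (1 / q)) * sigma.min()
    upper = (m1 ** (1 / p - 1 / q) - alpha) * np.sum(sigma ** q) ** (1 / q)
    assert lower - 1e-9 <= G <= upper + 1e-9
print("inequality (36) held on all random samples")
```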

Proof of Theorem 6.4

Let \(\mathbf{W}=Y-\hat{Y}\) and \(Y_{c}=Y-Y_{[k]}\). The decomposition of W with respect to the singular value decomposition of Y is given by

$$ U^{T}\mathbf{W}V= \begin{pmatrix} \mathbf{W}_{11} & \mathbf{W}_{12}\\ \mathbf{W}_{21} & \mathbf{W}_{22} \end{pmatrix} , $$

where \(U\in \mathbb{R}^{m_{1}\times m_{1}}\) and \(V\in \mathbb{R}^{m_{2}\times m_{2}}\) are the unitary matrices and \(\mathbf{W}_{11}\in \mathbb{R}^{k\times k}\), \(\mathbf{W}_{12}\in \mathbb{R}^{k\times (m_{1}-k)}\), \(\mathbf{W}_{21}\in \mathbb{R}^{(m_{1}-k)\times k}\), \(\mathbf{W}_{22}\in \mathbb{R}^{(m_{1}-k)\times (m_{1}-k)}\). Then, we can decompose W as \(\mathbf{W}=\mathbf{W}_{0}+\mathbf{W}_{c}\), where

$$ \mathbf{W}_{0}=U \begin{pmatrix} \mathbf{W}_{11} & \mathbf{W}_{12}\\ \mathbf{W}_{21} & 0 \end{pmatrix} V^{T} $$

and

$$ \mathbf{W}_{c}=U \begin{pmatrix} 0 & 0\\ 0 & \mathbf{W}_{22} \end{pmatrix} V^{T}. $$

For any \(1\leq k\leq m_{1}\), the best rank-k approximation \(Y_{[k]}\) of Y is defined by

$$ Y_{[k]}=U \begin{pmatrix} \operatorname{diag}(\sigma _{[k]}(Y)) & 0\\ 0 & 0 \end{pmatrix} V^{T}=U_{[k]}\mathbf{W}_{[k]}V_{[k]}^{T}, $$

where \(\sigma _{[k]}(Y)=(\sigma _{1}(Y),\sigma _{2}(Y),\ldots ,\sigma _{k}(Y))\) and \(\mathbf{W}_{[k]}\in \mathbb{R}^{m_{1}\times m_{1}}\) is a diagonal matrix in which the first k largest singular values of Y are on its diagonal.

\(U_{[k]}\in \mathbb{R}^{m_{1}\times m_{1}}\) and \(V_{[k]}\in \mathbb{R}^{m_{2}\times m_{2}}\) contain the left and right singular vectors of Y corresponding to the k largest singular values, respectively. Clearly, \(\operatorname{rank}(\mathbf{W}_{0}) \leq 2 k\), \(Y_{[k]}\mathbf{W}_{c}^{T}=0\), and \(Y_{[k]}^{T}\mathbf{W}_{c}=0\). Consider the singular value decomposition of \(\mathbf{W}_{22}\) given by

$$ \mathbf{W}_{22}=\tilde{U}\operatorname{diag} \bigl(\sigma (\mathbf{W}_{22}) \bigr)\tilde{V}^{T}, $$

where \(\tilde{U},\tilde{V}\in \mathbb{R}^{(m_{1}-k)\times (m_{1}-k)}\) are orthogonal matrices.

Also, \(\sigma (\mathbf{W}_{22})=(\sigma _{1}(\mathbf{W}_{22}),\sigma _{2}( \mathbf{W}_{22}),\ldots ,\sigma _{m_{1}-k}(\mathbf{W}_{22}))^{T}\) denotes the vector of singular values of \(\mathbf{W}_{22}\), where \(\sigma _{1}(\mathbf{W}_{22})\geq \sigma _{2}(\mathbf{W}_{22})\geq \cdots \geq \sigma _{m_{1}-k}(\mathbf{W}_{22}) \geq 0\). Therefore, \(\sigma (\mathbf{W}_{22})\) can be decomposed into a sum of vectors \(\sigma ^{l}(\mathbf{W}_{22})\) (\(l=1,2,\ldots \)) of support length μk defined by

$$ \sigma ^{l}_{j}(\mathbf{W}_{22})= \textstyle\begin{cases} \sigma _{j}(\mathbf{W}_{22})&\text{if } (l-1)\mu{k}< j\leq l\mu k, \\ 0 & \text{otherwise}. \end{cases} $$

Clearly, \(\sum_{l}\sigma ^{l}(\mathbf{W}_{22})=\sigma ( \mathbf{W}_{22})\). For \(l=1,2,\ldots \) , we set the matrix

$$ \mathbf{W}_{l}=U \begin{pmatrix} 0 & 0\\ 0 & \tilde{U}\operatorname{diag}(\sigma ^{l}(\mathbf{W}_{22}))\tilde{V}^{T} \end{pmatrix} V^{T}. $$

Then, \(\operatorname{rank}(\mathbf{W}_{l})\leq \mu k\), \(\mathbf{W}_{c}= \sum_{l\geq 1}\mathbf{W}_{l}\), and \(\langle \mathbf{W}_{0},\mathbf{W}_{l}\rangle =0\) for all \(l=1,2,\ldots \); moreover, \(\mathbf{W}_{j}\mathbf{W}_{l}^{T}=0\) and \(\mathbf{W}_{j}^{T}\mathbf{W}_{l}=0\) for \(j\neq l\).
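
To fix ideas, the splitting \(\mathbf{W}=\mathbf{W}_{0}+\mathbf{W}_{c}\) used above can be formed explicitly from the SVD of Y. The sketch below (illustrative only; the matrices and variable names are ours) builds \(\mathbf{W}_{0}\) and \(\mathbf{W}_{c}\) and checks that \(\operatorname{rank}(\mathbf{W}_{0})\leq 2k\) and \(\langle \mathbf{W}_{0},\mathbf{W}_{c}\rangle =0\).

```python
import numpy as np

def split_W(Y, W, k):
    """Rotate W into the SVD basis of Y and zero out blocks to form W0 and Wc."""
    U, _, Vt = np.linalg.svd(Y)                  # full SVD: U is m1 x m1, Vt is m2 x m2
    Wb = U.T @ W @ Vt.T                          # W expressed in the rotated coordinates
    Wc_b = np.zeros_like(Wb)
    Wc_b[k:, k:] = Wb[k:, k:]                    # keep only the lower-right block -> Wc
    W0_b = Wb - Wc_b                             # the remaining blocks -> W0
    return U @ W0_b @ Vt, U @ Wc_b @ Vt

rng = np.random.default_rng(3)
k = 2
Y = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 7))   # a rank-3 matrix, m1 = 5, m2 = 7
W = rng.normal(size=(5, 7))
W0, Wc = split_W(Y, W, k)
print(np.allclose(W0 + Wc, W))                   # W = W0 + Wc
print(np.linalg.matrix_rank(W0) <= 2 * k)        # rank(W0) <= 2k
print(abs(np.sum(W0 * Wc)) < 1e-8)               # <W0, Wc> = 0
```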

Since Ŷ is a minimizer of minimization problem (14), we have

$$\begin{aligned} \Vert Y_{[k]} \Vert _{p}+ \Vert Y_{c} \Vert _{p}-\alpha \Vert Y \Vert _{q}={}& \Vert Y \Vert _{p}-\alpha \Vert Y \Vert _{q} \\ \geq {}& \Vert \hat{Y} \Vert _{p}-\alpha \Vert \hat{Y} \Vert _{q} \\ \geq {}& \Vert Y-\mathbf{W} \Vert _{p}-\alpha \Vert Y- \mathbf{W} \Vert _{q} \\ \geq {}& \bigl\Vert Y_{[k]}+Y_{c}-( \mathbf{W}_{0}+\mathbf{W}_{c}) \bigr\Vert _{p} - \alpha \bigl\Vert Y-( \mathbf{W}_{0}+\mathbf{W}_{c}) \bigr\Vert _{q} \\ \geq {}& \Vert Y_{[k]}-\mathbf{W}_{c} \Vert _{p}- \Vert Y_{c}-\mathbf{W}_{0} \Vert _{p}- \alpha \Vert Y-\mathbf{W}_{0} \Vert _{q}-\alpha \Vert \mathbf{W}_{c} \Vert _{q} \\ \geq {}& \Vert Y_{[k]} \Vert _{p}+ \Vert \mathbf{W}_{c} \Vert _{p}- \Vert Y_{c} \Vert _{p}- \Vert \mathbf{W}_{0} \Vert _{p}-\alpha \Vert Y \Vert _{q} \\ &{}-\alpha \Vert \mathbf{W}_{0} \Vert _{q}-\alpha \Vert \mathbf{W}_{c} \Vert _{q}. \end{aligned}$$
(39)

Therefore,

$$\begin{aligned} \Vert \mathbf{W}_{c} \Vert _{p}-\alpha \Vert \mathbf{W}_{c} \Vert _{q}\leq \Vert \mathbf{W}_{0} \Vert _{p}+ \alpha \Vert \mathbf{W}_{0} \Vert _{q}+2 \Vert Y_{c} \Vert _{p}. \end{aligned}$$
(40)

Next, we derive an upper bound for \(\sum_{l\geq 2}\|\mathbf{W}_{l}\|_{F}\). Using Lemma 6.5, for \(l\geq 2\), \((l-1)\mu k< j\leq l\mu k\), and \((l-2)\mu k< i\leq (l-1)\mu k\), we have \(\sigma _{j}^{(l)}(\mathbf{W}_{22})\leq \min_{i} \sigma _{i}^{(l-1)}(\mathbf{W}_{22})\). Then,

$$ \sigma _{j}^{(l)}(\mathbf{W}_{22}) \leq \min_{i}\sigma _{i}^{(l-1)}( \mathbf{W}_{22})\leq \frac{ \Vert \mathbf{W}_{l-1} \Vert _{p}-\alpha \Vert \mathbf{W}_{l-1} \Vert _{q}}{(\mu k)^{\frac{1}{p}}-\alpha (\mu k)^{\frac{1}{q}}}. $$
(41)

Then, we have

$$\begin{aligned} \Vert \mathbf{W}_{l} \Vert _{2}&= \biggl( \sum _{j} \sigma _{j}^{(l)}( \mathbf{W}_{22})^{2} \biggr)^{\frac{1}{2}} \\ &\leq (\mu k)^{\frac{1}{2}} \frac{ \Vert \mathbf{W}_{l-1} \Vert _{p}-\alpha \Vert \mathbf{W}_{l-1} \Vert _{q}}{(\mu k)^{\frac{1}{p}}-\alpha (\mu k)^{\frac{1}{q}}} \\ &= \frac{ \Vert \mathbf{W}_{l-1} \Vert _{p}-\alpha \Vert \mathbf{W}_{l-1} \Vert _{q}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$

Hence, it follows that

$$\begin{aligned} \sum_{l\geq 2} \Vert \mathbf{W}_{l} \Vert _{2}&\leq \frac{ \sum_{l\geq 2} ( \Vert \mathbf{W}_{l-1} \Vert _{p}-\alpha \Vert \mathbf{W}_{l-1} \Vert _{q} )}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \\ &= \frac{ \sum_{l\geq 1} \Vert \mathbf{W}_{l} \Vert _{p}-\alpha \sum_{i\geq 1} \Vert \mathbf{W}_{l} \Vert _{q}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(42)

We note that

$$\begin{aligned} \sum_{l\geq 1} \Vert \mathbf{W}_{l} \Vert _{p}\leq \Vert \mathbf{W}_{c} \Vert _{p} \quad \text{and}\quad \Vert \mathbf{W}_{c} \Vert _{q}\leq \sum_{l \geq 1} \Vert \mathbf{W}_{l} \Vert _{q}. \end{aligned}$$
(43)

Using (43) in (42), we have

$$\begin{aligned} \sum_{l\geq 2} \Vert \mathbf{W}_{l} \Vert _{2}&\leq \frac{ \Vert \mathbf{W}_{c} \Vert _{p}-\alpha \Vert \mathbf{W}_{c} \Vert _{q}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(44)

Using (40) in (44), we have

$$\begin{aligned} \sum_{l\geq 2} \Vert \mathbf{W}_{l} \Vert _{2}&\leq \frac{ \Vert \mathbf{W}_{0} \Vert _{p}+\alpha \Vert \mathbf{W}_{0} \Vert _{q}+2 \Vert Y_{c} \Vert _{p}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \\ &= \frac{ ((2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}} ) \Vert \mathbf{W}_{0}+\mathbf{W}_{1} \Vert _{2}+2 \Vert Y_{c} \Vert _{p}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}}. \end{aligned}$$
(45)

By the feasibility of Y and Ŷ, we have

$$\begin{aligned} \bigl\Vert \mathcal{A}(\mathbf{W}) \bigr\Vert _{2}&= \bigl\Vert \mathcal{A}(Y-\hat{Y}) \bigr\Vert _{2} \\ &= \bigl\Vert \bigl( \mathcal{A}(Y)-b\bigr)-\bigl(\mathcal{A}(\hat{Y})-b\bigr) \bigr\Vert _{2} \\ &\leq \bigl\Vert \mathcal{A}(Y)-b \bigr\Vert _{2}+ \bigl\Vert \mathcal{A}(\hat{Y})-b \bigr\Vert _{2} \\ &\leq 2\eta . \end{aligned}$$
(46)

Using the definition of the matrix-RIP condition, we have

$$\begin{aligned} \bigl\Vert \mathcal{A}(\mathbf{W}) \bigr\Vert _{2}&= \biggl\Vert \mathcal{A} ({ \mathbf{W}_{0}}+{\mathbf{W}_{1}} )+\sum _{l\geq 2}\mathcal{A}({ \mathbf{W}_{l}}) \biggr\Vert _{2} \\ &\geq \bigl\Vert \mathcal{A} ({\mathbf{W}_{0}}+{ \mathbf{W}_{1}} ) \bigr\Vert _{2}- \sum _{l\geq 2} \bigl\Vert \mathcal{A}({\mathbf{W}_{l}}) \bigr\Vert _{2} \\ &\geq ({1-\delta _{(\mu +2)k}}) \Vert \mathbf{W}_{0}+ \mathbf{W}_{1} \Vert _{2}-({1+ \delta _{\mu k}})\sum _{l\geq 2} \Vert \mathbf{W}_{l} \Vert _{2}. \end{aligned}$$
(47)

Using (45) in (47), we have

$$\begin{aligned} \bigl\Vert \mathcal{A}(\mathbf{W}) \bigr\Vert _{2}\geq{} &({1-\delta _{(\mu +2)k}}) \Vert \mathbf{W}_{0}+ \mathbf{W}_{1} \Vert _{2}-({1+\delta _{\mu k}}) \\ &{}\times \frac{ ((2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}} ) \Vert \mathbf{W}_{0}+\mathbf{W}_{1} \Vert _{2}+2 \Vert Y_{c} \Vert _{p}}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \\ ={}& \biggl({1-\delta _{(\mu +2)k}}- \frac{({1+\delta _{\mu k}})}{\mathcal{C}} \biggr) \Vert \mathbf{W}_{0}+ \mathbf{W}_{1} \Vert _{2} \\ &{}- \frac{2({1+\delta _{\mu k}})}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y_{c} \Vert _{p}. \end{aligned}$$
(48)

Therefore, if \(\delta _{\mu k}+\mathcal{C}\delta _{(\mu +2)k}<\mathcal{C}-1 \), then it yields that

$$\begin{aligned} \Vert \mathbf{W}_{0}+\mathbf{W}_{1} \Vert _{2}\leq {}& \frac{1}{{1-\delta _{(\mu +2)k}}-\frac{({1+\delta _{\mu k}})}{\mathcal{C}}} \bigl\Vert \mathcal{A}(\mathbf{W}) \bigr\Vert _{2}+ \frac{2(1+\delta _{\mu k})}{{1-\delta _{(\mu +2)k}}-\frac{({1+\delta _{\mu k}})}{\mathcal{C}}} \\ &{}\times \frac{1}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y_{c} \Vert _{p} \\ \leq {}& \frac{2\mathcal{C}\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \\ &{}+ \frac{2\mathcal{C}(1+\delta _{\mu k}) ((\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}} )^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p}. \end{aligned}$$
(49)

Using (49) in (45), we have

$$\begin{aligned} \sum_{l\geq 2} \Vert \mathbf{W}_{l} \Vert _{2}\leq {}&\frac{1}{\mathcal{C}} \biggl[ \frac{2\mathcal{C}\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} + \frac{2\mathcal{C}(1+\delta _{\mu k}) ((\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}} )^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p} \biggr] \\ &{}+ \frac{2}{(\mu k)^{\frac{1}{p}-\frac{1}{2}}-\alpha (\mu k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y_{c} \Vert _{p} \\ \leq {}& \frac{2\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \\ &{}+ \frac{2(1-\delta _{(\mu +2)k})((2 k)^{\frac{1}{p}-\frac{1}{2}}+\alpha ( 2 k)^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p}. \end{aligned}$$
(50)

We know that

$$ \Vert \mathbf{W} \Vert _{2}\leq \Vert \mathbf{W}_{0}+\mathbf{W}_{1} \Vert _{2}+\sum _{l \geq 2} \Vert \mathbf{W}_{l} \Vert _{2}. $$
(51)

Combining (45), (49), and (51), we have

$$\begin{aligned} \Vert \mathbf{W} \Vert _{2}\leq{} & \frac{2\mathcal{C}\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}+ \frac{2(1+\delta _{\mu k}) ((2 k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2 k)^{\frac{1}{q}-\frac{1}{2}} )^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p} \\ &{}+ \frac{2\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}+ \frac{2(1-\delta _{(\mu +2)k})(( 2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha ( 2k )^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p} \\ \leq{}& \frac{2(\mathcal{C}+1)\eta}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}+ \frac{2({2-\delta _{(\mu +2)k}}+\delta _{\mu k})((2 k)^{\frac{1}{p}-\frac{1}{2}}+\alpha ( 2k)^{\frac{1}{q}-\frac{1}{2}})^{-1}}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}} \Vert Y_{c} \Vert _{p} \\ ={}&\mathbf{C}_{3}\eta +\mathbf{C}_{4} \frac{1}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y_{c} \Vert _{p} \\ ={}&\mathbf{C}_{3}\eta +\mathbf{C}_{4} \frac{1}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y-Y_{k} \Vert _{p}, \end{aligned}$$
(52)

where \(\mathbf{C}_{3}= \frac {2(\mathcal{C}+1)}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\) and \(\mathbf{C}_{4}= \frac {2({2-\delta _{(\mu +2)k}}+\delta _{\mu k})}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\). □

In the following corollary, we obtain the exact rank-minimization recovery result for the weighted \(l_{p}-l_{q}\) minimization model (14) under the matrix-RIP condition when the noise level \(\eta = 0\).

Corollary 6.6

(Noiseless Recovery)

If all the conditions of Theorem 6.4 hold and the noise level \(\eta =0\), then any solution Ŷ of the matrix-weighted \(l_{p}-l_{q}\) minimization model (14) obeys

$$ \Vert Y-\hat{Y} \Vert _{2}\leq \mathbf{C}_{4} \frac{1}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y-Y_{k} \Vert _{p}, $$

where \(\mathbf{C}_{4}= \frac {2({2-\delta _{(\mu +2)k}}+\delta _{\mu k})}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\). In particular, if Y has rank at most k, then the recovery result is exact.

Proof

Setting the noise level \(\eta =0\) in (52), we have

$$ \Vert \mathbf{W} \Vert _{2}\leq \mathbf{C}_{4} \frac{1}{(2k)^{\frac{1}{p}-\frac{1}{2}}+\alpha (2k)^{\frac{1}{q}-\frac{1}{2}}} \Vert Y-Y_{k} \Vert _{p}, $$

where \(\mathbf{C}_{4}= \frac {2({2-\delta _{(\mu +2)k}}+\delta _{\mu k})}{{\mathcal{C}-\mathcal{C}\delta _{(\mu +2)k}}-1-\delta _{\mu k}}\). If Y has rank at most k, then \(\Vert Y-Y_{k}\Vert _{p}=0\) and the recovery is exact, i.e., \(\mathbf{W}=0\) and \(\hat{Y}=Y\). □

7 Conclusion

In this work, we have considered the problems of block-sparse signal recovery and rank minimization. We have proposed a nonconvex, nonsmooth, and non-Lipschitz weighted \(l_{p}-l_{q}\) (\(0< p\leq 1\), \(1< q\leq 2\); \(0\leq \alpha \leq 1\)) norm as a nonconvex metric for these problems. We obtained theoretical error bounds for the proposed model for block-sparse recovery and rank minimization when the measurements are degraded by noise, and we obtained exact recovery results in the noiseless case.

Availability of data and materials

Not applicable.

References

  1. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)


  2. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)


  3. Candès, E.J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006)


  4. Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci., Sér. 346, 589–592 (2008)


  5. Chen, S., Donoho, D., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33–61 (1998)


  6. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Sci. Comput. 24, 227–234 (1995)


  7. Gribonval, R., Nielsen, M.: Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49, 3320–3325 (2003)


  8. Majumdar, A., Ward, R.: Compressed sensing of color images. Signal Process. 90(12), 3122–3127 (2010)


  9. Malioutov, D., Cetin, M., Willsky, A.: Sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8), 3010–3022 (2005)


  10. Parvaresh, F., Vikalo, H., Misra, S., Hassibi, B.: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008)


  11. Mishali, M., Eldar, Y.C.: Reduce and boost: recovering arbitrary sets of jointly sparse vectors. IEEE Trans. Signal Process. 56(10), 4692–4702 (2008)


  12. Vidal, R., Ma, Y.: A unified algebraic approach to 2-D and 3-D motion segmentation and estimation. J. Math. Imaging Vis. 25(3), 403–421 (2005)


  13. Erickson, S., Sabatti, C.: Empirical Bayes estimation of a sparse vector of gene expression changes. Stat. Appl. Genet. Mol. Biol. 4, 22 (2005)


  14. Cotter, S., Rao, B.: Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50(3), 374–377 (2002)


  15. Huang, S., Yang, Y., Yang, D.: Class specific sparse representation for classification. Signal Process. 116, 38–42 (2015)


  16. Liu, Y., Wan, Q.: Enhanced compressive wideband frequency spectrum sensing for dynamic spectrum access. EURASIP J. Adv. Signal Process. 2012(177), 1–11 (2012)


  17. Chen, W., Li, Y.: The high order RIP condition for signal recovery. J. Comput. Math. 37(1), 61–75 (2019)


  18. Gao, Y., Ma, M.: A new bound on the block restricted isometry constant in compressed sensing. J. Inequal. Appl. 2017, 174 (2017)


  19. Wang, Y., Wang, J., Xu, Z.: Restricted p-isometry properties of nonconvex blocksparse compressed sensing. Signal Process. 104, 188–196 (2014)


  20. Cai, Y.: Weighted \(l_{p}-l_{1}\) minimization methods for block sparse recovery and rank minimization. Anal. Appl. (2020)

  21. Wen, J., Zhou, Z., Liu, Z., Lai, M., Tang, X.: Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit. Appl. Comput. Harmon. Anal. 47(3), 948–974 (2019)


  22. Lin, L., Li, S.: Block sparse recovery via mixed l2/l1 minimization. Acta Math. Sin. 29(7), 1401–1412 (2013)


  23. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)


  24. Eldar, Y., Mishali, M.: Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 55(11), 5302–5316 (2009)


  25. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vis. 9(1), 137–154 (1992)


  26. Basri, R., Jacobs, D.: Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 218–233 (2003)


  27. Abernethy, J., Bach, F., Evgeniou, T., Vert, J.P.: A new approach to collaborative filtering: operator estimation with spectral regularization. J. Mach. Learn. Res. 10, 803–826 (2009)


  28. Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proc. Int. Conf. Mach. Learn., pp. 713–719 (2005)


  29. Mesbahi, M., Papavassilopoulos, G.P.: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42(2), 239–243 (1997)


  30. Srebro, N.: Learning with matrix factorizations. Ph.D. dissertation, Massachusetts Inst. Technol, Cambridge, MA, USA (2004)

  31. Gross, D., Liu, Y.K., Flammia, S.T., Becker, S., Eisert, J.: Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105, 150401 (2010)


  32. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2009)


  33. Amit, Y., Fink, M., Srebro, N., Ullman, S.: Uncovering shared structures in multiclass classification. In: Proc. 24th Int. Conf. Mach. Learn., pp. 17–24. (2007)

  34. Morita, T., Kanade, T.: A sequential factorization method for recovering shape and motion from image streams. IEEE Trans. Pattern Anal. Mach. Intell. 19(8), 858–867 (1997)


  35. Zhang, M., Huang, Z.H., Zhang, Y.: Restricted p-isometry properties of nonconvex matrix recovery. IEEE Trans. Inf. Theory 59(7), 4316–4323 (2013)


  36. Ma, T.H., Lou, Y., Huang, T.Z.: Truncated \(l_{1-2}\) models for sparse recovery and rank minimization. SIAM J. Imaging Sci. 10(3), 1346–1380 (2017)


  37. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)


  38. Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proc. IEEE Amer. Control Conf, pp. 4734–4739 (2001)


  39. Candès, E.J., Plan, Y.: Tight oracle bounds for low-rank recovery from a minimal number of random measurements. IEEE Trans. Inf. Theory 57(4), 2342–2359 (2011)


  40. Kong, L.C., Xiu, N.H.: Exact low-rank matrix recovery via nonconvex \(M_{p}\)-minimization. Optimization Online (2011)

  41. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(2), 471–501 (2010)


  42. Esser, E., Lou, Y., Xin, J.: A method for finding structured sparse solutions to nonnegative least squares problems with applications. SIAM J. Imaging Sci. 6, 2010–2046 (2013)


  43. Yin, P., Lou, Y., He, Q., Xin, J.: Minimization of \(l_{1-2}\) for compressed sensing. SIAM J. Sci. Comput. 37, 536–563 (2015)


  44. Wang, D., Zhang, Z.: Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing. Circuits Syst. Signal Process. 36, 4326–4353 (2017)


  45. Zhao, Y., He, X., Huang, T., Huang, J.: Smoothing inertial projection neural network for minimization \(L_{p-q}\) in sparse signal reconstruction. Neural Netw. 99, 33–41 (2018)



Acknowledgements

The first author is thankful to SERB, Government of India, New Delhi for support to this work through the project EMR/2016/002003.

Funding

Not applicable.

Author information


Contributions

HKN framed the problems. HKN and SY carried out the results and wrote the manuscript. Both the authors contributed equally. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to H. K. Nigam.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Nigam, H.K., Yadav, S. Block-sparse recovery and rank minimization using a weighted \(l_{p}-l_{q}\) model. J Inequal Appl 2023, 29 (2023). https://doi.org/10.1186/s13660-023-02932-2

