A new bound on the block restricted isometry constant in compressed sensing

This paper focuses on the sufficient condition for block sparse recovery via $l_{2}/l_{1}$-minimization. We show that if the measurement matrix satisfies the block restricted isometry property with $\delta_{2s|\mathcal{I}} < 0.6246$, then every block $s$-sparse signal can be exactly recovered via the $l_{2}/l_{1}$-minimization approach in the noiseless case and stably recovered in the noisy measurement case. The result improves the bound on the block restricted isometry constant $\delta_{2s|\mathcal{I}}$ of Lin and Li (Acta Math. Sin. Engl. Ser. 29(7):1401-1412, 2013).


Introduction
Compressed sensing [-] is a scheme which shows that some signals can be reconstructed from fewer measurements than the classical Nyquist-Shannon sampling method requires. This effective sampling method has a number of potential applications in signal processing, as well as other areas of science and technology. Its essential model is
$$\min_{x\in\mathbb{R}^{N}}\ \|x\|_{0}\quad\text{subject to}\quad y=Ax,$$
where $\|x\|_{0}$ denotes the number of non-zero entries of the vector $x$; an $s$-sparse vector $x\in\mathbb{R}^{N}$ is defined by $\|x\|_{0}\le s\ll N$. However, the $l_{0}$-minimization above is a nonconvex and NP-hard optimization problem [] and thus is computationally infeasible. To overcome this problem, one proposed the $l_{1}$-minimization [, -]:
$$\min_{x\in\mathbb{R}^{N}}\ \|x\|_{1}\quad\text{subject to}\quad y=Ax,$$
where $\|x\|_{1}=\sum_{i=1}^{N}|x_{i}|$. Candès [] proved that the solutions of the $l_{1}$-minimization are equivalent to those of the $l_{0}$-minimization provided that the measurement matrix satisfies the restricted isometry property (RIP) [, ] with a suitable restricted isometry constant (RIC) $\delta_{s}\in(0,1)$; here $\delta_{s}$ is defined as the smallest constant satisfying
$$(1-\delta_{s})\|x\|_{2}^{2}\le\|Ax\|_{2}^{2}\le(1+\delta_{s})\|x\|_{2}^{2}$$
for any $s$-sparse vector $x\in\mathbb{R}^{N}$.
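The $l_{1}$ surrogate above is a convex program; a minimal numerical sketch (not from the paper; the problem sizes, the Gaussian matrix, and all variable names are illustrative assumptions) recasts it as a linear program in split variables and solves it with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# l1-minimization  min ||x||_1  s.t.  Ax = y,  recast as an LP in the
# split variables x = u - w with u, w >= 0 (sizes are illustrative).
rng = np.random.default_rng(0)
n, N, s = 20, 40, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

c = np.ones(2 * N)                      # objective: sum(u) + sum(w) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

# x_true is feasible, so the minimizer's l1-norm cannot exceed ||x_true||_1.
print(res.status, np.linalg.norm(x_hat, 1) <= np.linalg.norm(x_true, 1) + 1e-6)
```

For Gaussian matrices of these dimensions, the LP solution typically coincides with the sparse vector itself, which is exactly the equivalence the RIP condition guarantees.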
However, standard compressed sensing only considers the sparsity of the recovered signal and does not take into account any further structure. In many practical applications, for example, DNA microarrays [], face recognition [], color imaging [], image annotation [], multi-response linear regression [], etc., the non-zero entries of a sparse signal can be aligned or classified into blocks, which means that they appear in regions in a regular order instead of being arbitrarily spread throughout the vector. Such signals are called block sparse signals and have attracted considerable interest; see [-] for more information. Suppose that $x\in\mathbb{R}^{N}$ is split into $m$ blocks $x[1],x[2],\ldots,x[m]$, which are of length $d_{1},d_{2},\ldots,d_{m}$, respectively, that is,
$$x=(\underbrace{x_{1},\ldots,x_{d_{1}}}_{x[1]},\underbrace{x_{d_{1}+1},\ldots,x_{d_{1}+d_{2}}}_{x[2]},\ldots,\underbrace{x_{N-d_{m}+1},\ldots,x_{N}}_{x[m]})^{T},$$
with $N=\sum_{i=1}^{m}d_{i}$; a vector $x$ is called block $s$-sparse over $\mathcal{I}=\{d_{1},d_{2},\ldots,d_{m}\}$ if $x[i]$ is non-zero for at most $s$ indices $i$. To recover a block sparse signal, similar to the standard $l_{0}$-minimization, one seeks the sparsest block sparse vector via the following $l_{2}/l_{0}$-minimization [, , ]:
$$\min_{x\in\mathbb{R}^{N}}\ \|x\|_{2,0}\quad\text{subject to}\quad y=Ax,\qquad\text{where } \|x\|_{2,0}=\sum_{i=1}^{m}I\bigl(\|x[i]\|_{2}>0\bigr).$$
But the $l_{2}/l_{0}$-minimization problem is also NP-hard. It is natural to use the $l_{2}/l_{1}$-minimization to replace the $l_{2}/l_{0}$-minimization [, , , ]:
$$\min_{x\in\mathbb{R}^{N}}\ \|x\|_{2,1}\quad\text{subject to}\quad y=Ax,\qquad\text{where } \|x\|_{2,1}=\sum_{i=1}^{m}\|x[i]\|_{2}.$$
To characterize the performance of this method, Eldar and Mishali [] proposed the block restricted isometry property (block RIP).
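The two mixed norms just introduced are easy to compute; a minimal helper sketch (function names and the toy vector are my own, not from the paper) makes the block structure concrete:

```python
import numpy as np

# Helpers for the mixed norms: block sizes d_1, ..., d_m are passed as a
# list, so N = sum(sizes).
def blocks(x, sizes):
    idx = np.cumsum([0] + list(sizes))
    return [x[idx[i]:idx[i + 1]] for i in range(len(sizes))]

def norm_2_0(x, sizes):
    """Number of non-zero blocks, i.e. the block sparsity ||x||_{2,0}."""
    return sum(np.linalg.norm(b) > 0 for b in blocks(x, sizes))

def norm_2_1(x, sizes):
    """||x||_{2,1} = sum of the l2-norms of the blocks."""
    return sum(np.linalg.norm(b) for b in blocks(x, sizes))

x = np.array([1.0, 2.0, 0.0, 0.0, 3.0, 0.0])
sizes = [2, 2, 2]            # blocks: (1,2), (0,0), (3,0)
print(norm_2_0(x, sizes))    # 2 non-zero blocks
print(norm_2_1(x, sizes))    # sqrt(5) + 3
```

Note that the same vector is $3$-sparse in the classical sense but only block $2$-sparse over these blocks, which is why block-aware recovery can succeed with fewer measurements.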
Definition 1 (Block RIP) Given a matrix $A\in\mathbb{R}^{n\times N}$, if for every block $s$-sparse $x\in\mathbb{R}^{N}$ over $\mathcal{I}=\{d_{1},d_{2},\ldots,d_{m}\}$ there exists a positive constant $0<\delta_{s|\mathcal{I}}<1$ such that
$$(1-\delta_{s|\mathcal{I}})\|x\|_{2}^{2}\le\|Ax\|_{2}^{2}\le(1+\delta_{s|\mathcal{I}})\|x\|_{2}^{2},$$
then the matrix $A$ satisfies the $s$-order block RIP over $\mathcal{I}$, and the smallest constant $\delta_{s|\mathcal{I}}$ satisfying the above inequality is called the block RIC of $A$.
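For very small problems the block RIC of Definition 1 can be computed exactly by brute force; a sketch (the function name and test matrix are my own, and this enumeration is only feasible for tiny $m$ and $s$):

```python
import itertools
import numpy as np

def block_ric(A, sizes, s):
    """Exact block RIC delta_{s|I} by enumeration: over every set of s
    blocks, take the extreme eigenvalues of the Gram matrix of the
    corresponding column submatrix of A."""
    idx = np.cumsum([0] + list(sizes))
    delta = 0.0
    for S in itertools.combinations(range(len(sizes)), s):
        cols = np.concatenate([np.arange(idx[i], idx[i + 1]) for i in S])
        eig = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta

# A matrix with orthonormal columns satisfies the block RIP with delta = 0.
print(block_ric(np.eye(4), [2, 2], 1))   # 0.0
```

The exhaustive search over supports is exactly what makes certifying the RIP hard in general; in practice one relies on random matrices that satisfy the bound with high probability.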
Obviously, the block RIP is an extension of the standard RIP, but it is a less stringent requirement than the standard RIP [, ]. Eldar et al. [] proved that the $l_{2}/l_{1}$-minimization can exactly recover any block $s$-sparse signal when the measurement matrix $A$ satisfies the block RIP with $\delta_{2s|\mathcal{I}}<0.414$. The block RIC bound can be improved; for example, Lin and Li [] improved the bound to $\delta_{2s|\mathcal{I}}<0.4931$ and established another sufficient condition, $\delta_{s|\mathcal{I}}<0.307$, for exact recovery. So far, to the best of our knowledge, no paper has further focused on improving the block RIC bound. As mentioned in [, , ], as with the RIC, there are several benefits of improving the bound on $\delta_{2s|\mathcal{I}}$. First, it allows more measurement matrices to be used in compressed sensing. Secondly, for the same matrix $A$, it allows a block sparse signal with more non-zero entries to be recovered. Furthermore, it gives better error estimates in the general problem of recovering noisy compressible signals. Therefore, this paper addresses the improvement of the block RIC bound. We consider the following minimization for the inaccurate measurement $y=Ax+e$ with $\|e\|_{2}\le\epsilon$:
$$\min_{z\in\mathbb{R}^{N}}\ \|z\|_{2,1}\quad\text{subject to}\quad \|y-Az\|_{2}\le\epsilon.$$
Our main result is stated in the following theorem.
Theorem 1 Suppose that the $2s$-order block RIC of the matrix $A\in\mathbb{R}^{n\times N}$ satisfies
$$\delta_{2s|\mathcal{I}}<\frac{4}{\sqrt{41}}\approx 0.6246.$$
If $x^{*}$ is a solution to the noisy $l_{2}/l_{1}$-minimization above, then there exist positive constants $C_{1},D_{1}$ and $C_{2},D_{2}$ such that
$$\|x^{*}-x\|_{2,1}\le C_{1}\sigma_{s}(x)_{2,1}+D_{1}\sqrt{s}\,\epsilon,\qquad
\|x^{*}-x\|_{2}\le C_{2}\frac{\sigma_{s}(x)_{2,1}}{\sqrt{s}}+D_{2}\epsilon,$$
where the constants $C_{1},D_{1}$ and $C_{2},D_{2}$ depend only on $\delta_{2s|\mathcal{I}}$ (see Remark 1), and $\sigma_{s}(x)_{2,1}$ denotes the best block $s$-term approximation error of $x\in\mathbb{R}^{N}$ in the $l_{2}/l_{1}$ norm, i.e.,
$$\sigma_{s}(x)_{2,1}=\min\bigl\{\|x-z\|_{2,1}:z\ \text{is block }s\text{-sparse over }\mathcal{I}\bigr\}.$$

Corollary 1 Under the same assumptions as in Theorem 1, if $e=0$ and $x$ is block $s$-sparse, then $x$ can be exactly recovered via the $l_{2}/l_{1}$-minimization.
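For context, the numeric relation between the bounds quoted above can be checked in one line (an illustrative check, not from the paper; the variable names are mine):

```python
import math

# Bounds on the block RIC quoted above: Eldar et al., Lin-Li, and the
# bound 4/sqrt(41) of the present paper.
eldar = math.sqrt(2) - 1           # ~0.4142
lin_li = 0.4931
new_bound = 4 / math.sqrt(41)      # ~0.6246
print(eldar < lin_li < new_bound)  # each improvement admits more matrices
```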
The remainder of the paper is organized as follows. In Section 2, we introduce the $l_{2,1}$ robust block NSP, which characterizes the stability and robustness of the $l_{2}/l_{1}$-minimization with noisy measurements. In Section 3, we show that the condition of Theorem 1 implies the $l_{2,1}$ robust block NSP, which completes the proof of our main result. Section 4 contains our conclusions. The last section is an appendix including an important lemma.

Block null space property
The null space property (NSP) is a very important concept in approximation theory [, ]: it provides a necessary and sufficient condition for the existence and uniqueness of the solution to the $l_{1}$-minimization, so the NSP has drawn extensive attention in the study of measurement matrices in compressed sensing []. It is natural to extend the classic NSP to the block sparse case. For this purpose, we introduce some notation. Suppose that $x\in\mathbb{R}^{N}$ is an $m$-block signal whose structure is as above. We let $S\subset\{1,2,\ldots,m\}$, and by $S^{C}$ we mean the complement of the set $S$ with respect to $\{1,2,\ldots,m\}$, i.e., $S^{C}=\{1,2,\ldots,m\}\setminus S$. Let $x_{S}$ denote the vector equal to $x$ on a block index set $S$ and zero elsewhere; then $x=x_{S}+x_{S^{C}}$. Here, to investigate the solution of the noisy $l_{2}/l_{1}$-minimization, we introduce the $l_{2,1}$ robust block NSP; for more information on other forms of the block NSP, we refer the reader to [, ].
Definition 2 ($l_{2,1}$ robust block NSP) Given a matrix $A\in\mathbb{R}^{n\times N}$, if for any set $S\subset\{1,2,\ldots,m\}$ with $\operatorname{card}(S)\le s$ and for all $v\in\mathbb{R}^{N}$ there exist constants $0<\tau<1$ and $\gamma>0$ such that
$$\|v_{S}\|_{2}\le\frac{\tau}{\sqrt{s}}\|v_{S^{C}}\|_{2,1}+\gamma\|Av\|_{2},$$
then the matrix $A$ is said to satisfy the $l_{2,1}$ robust block NSP of order $s$ with $\tau$ and $\gamma$.
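The defining inequality can be spot-checked numerically; a sketch (the helper name is mine, and a single check like this is of course only necessary, not sufficient, since the property quantifies over all $v$ and all $S$):

```python
import numpy as np

def nsp_inequality_holds(A, v, S, sizes, s, tau, gamma, tol=1e-12):
    """Check the robust block NSP inequality
        ||v_S||_2 <= (tau/sqrt(s)) ||v_{S^C}||_{2,1} + gamma ||A v||_2
    for ONE vector v and ONE block index set S with |S| <= s."""
    idx = np.cumsum([0] + list(sizes))
    blk = [v[idx[i]:idx[i + 1]] for i in range(len(sizes))]
    lhs = np.sqrt(sum(np.linalg.norm(blk[i]) ** 2 for i in S))
    tail = sum(np.linalg.norm(blk[i]) for i in range(len(sizes)) if i not in S)
    return lhs <= tau / np.sqrt(s) * tail + gamma * np.linalg.norm(A @ v) + tol

# With A = I and gamma = 1 we have ||Av||_2 = ||v||_2 >= ||v_S||_2, so the
# inequality holds for any tau and any S.
v = np.array([3.0, 4.0, 1.0, 2.0])
print(nsp_inequality_holds(np.eye(4), v, S={0}, sizes=[2, 2], s=1,
                           tau=0.5, gamma=1.0))   # True
```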
Our main result relies heavily on this definition. A natural question is what the relationship between this robust block NSP and the block RIP is. Indeed, in the next section we shall see that the block RIP with the condition of Theorem 1 leads to the $l_{2,1}$ robust block NSP; that is, the $l_{2,1}$ robust block NSP is weaker than the block RIP to some extent. The spirit of this definition first yields the following theorem.
Theorem 2 For any set $S\subset\{1,2,\ldots,m\}$ with $\operatorname{card}(S)\le s$, if the matrix $A\in\mathbb{R}^{n\times N}$ satisfies the $l_{2,1}$ robust block NSP of order $s$ with constants $0<\tau<1$ and $\gamma>0$, then, for all vectors $x,z\in\mathbb{R}^{N}$,
$$\|x-z\|_{2,1}\le\frac{1+\tau}{1-\tau}\bigl(\|z\|_{2,1}-\|x\|_{2,1}+2\|x_{S^{C}}\|_{2,1}\bigr)+\frac{2\gamma\sqrt{s}}{1-\tau}\|A(x-z)\|_{2}.$$
Proof Clearly, for an $m$-block vector $x\in\mathbb{R}^{N}$ structured as above, the $l_{2}/l_{1}$-norm can be rewritten as $\|x\|_{2,1}=\|x_{S}\|_{2,1}+\|x_{S^{C}}\|_{2,1}$. Applying the robust block NSP to $v=z-x$, first to bound $\|v_{S}\|_{2,1}$ and then once again to absorb the resulting terms, we derive the desired inequality.
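The step that moves Definition 2 from the $l_{2}$-scale to the $l_{2}/l_{1}$-scale used in the proof above is a Cauchy-Schwarz estimate; a sketch of this step, assuming the defining inequality in the form given in Definition 2:

```latex
% v_S has at most s non-zero blocks, so Cauchy-Schwarz over the blocks gives
%   ||v_S||_{2,1} <= sqrt(s) ||v_S||_2.
% Combining this with the robust block NSP inequality yields the
% (2,1)-scale form used for v = z - x:
\[
  \|v_S\|_{2,1}
  \le \sqrt{s}\,\|v_S\|_{2}
  \le \tau\,\|v_{S^{C}}\|_{2,1} + \gamma\sqrt{s}\,\|Av\|_{2}.
\]
```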
The l , robust block NSP is vital to characterize the stability and robustness of the l  /l minimization with noisy measurement (), which is the following result.
Theorem 3 Suppose that the matrix $A\in\mathbb{R}^{n\times N}$ satisfies the $l_{2,1}$ robust block NSP of order $s$ with constants $0<\tau<1$ and $\gamma>0$. If $x^{*}$ is a solution to the $l_{2}/l_{1}$-minimization with $y=Ax+e$ and $\|e\|_{2}\le\epsilon$, then there exist positive constants $C_{1},D_{1}$ and $C_{2},D_{2}$ such that
$$\|x^{*}-x\|_{2,1}\le C_{1}\sigma_{s}(x)_{2,1}+D_{1}\sqrt{s}\,\epsilon,\qquad
\|x^{*}-x\|_{2}\le C_{2}\frac{\sigma_{s}(x)_{2,1}}{\sqrt{s}}+D_{2}\epsilon.$$
Proof In Theorem 2, let $S$ denote an index set of the $s$ largest (in $l_{2}$-norm) blocks out of the $m$ blocks of $x$. The first estimate is a direct corollary of Theorem 2 once we notice that $\|x_{S^{C}}\|_{2,1}=\sigma_{s}(x)_{2,1}$ and $\|A(x-x^{*})\|_{2}\le 2\epsilon$. The second estimate follows from the corresponding result for $q=2$ in [].
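How the first pair of constants arises can be sketched explicitly, assuming the inequality of Theorem 2 in the form stated above (the resulting expressions for $C_{1}$ and $D_{1}$ are a reconstruction, not quoted from the paper):

```latex
% Take z = x^* in Theorem 2. Since x is feasible for the noisy program,
% minimality of x^* gives ||x^*||_{2,1} <= ||x||_{2,1}, and the triangle
% inequality gives ||A(x - x^*)||_2 <= ||Ax - y||_2 + ||y - Ax^*||_2
% <= 2*epsilon. Hence
\[
  \|x - x^{*}\|_{2,1}
  \le \frac{1+\tau}{1-\tau}\,2\sigma_{s}(x)_{2,1}
    + \frac{2\gamma\sqrt{s}}{1-\tau}\,2\epsilon,
\]
% so one may take  C_1 = 2(1+\tau)/(1-\tau)  and  D_1 = 4\gamma/(1-\tau).
```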

Proof of the main result
From Theorem 3, we see that the inequalities in Theorem 1 are the same as those in Theorem 3 up to constants. This means that we only need to show that the condition of Theorem 1 implies the $l_{2,1}$ robust block NSP in order to complete the proof of our main result.
Theorem 4 Suppose that the $2s$-order block RIC of the matrix $A\in\mathbb{R}^{n\times N}$ obeys the condition of Theorem 1. Then the matrix $A$ satisfies the $l_{2,1}$ robust block NSP of order $s$ with constants $0<\tau<1$ and $\gamma>0$ that depend only on $\delta_{2s|\mathcal{I}}$.

Proof The proof relies on a technique introduced in []. Suppose that the matrix $A$ has the block RIP with $\delta_{2s|\mathcal{I}}$. Let $v$ be divided into $m$ blocks structured as above, and let $S=:S_{0}$ be an index set of the $s$ largest (in $l_{2}$-norm) blocks out of the $m$ blocks of $v$. We begin by dividing $S^{C}$ into subsets of size $s$: $S_{1}$ indexes the first $s$ largest (in $l_{2}$-norm) blocks in $S^{C}$, $S_{2}$ the next $s$ largest, and so on. The goal is to establish, for every $j\ge 1$, an upper bound on $|\langle Av_{S},Av_{S_{j}}\rangle|$ in terms of $\|v_{S}\|_{2}\|v_{S_{j}}\|_{2}$. To do so, we normalize the vectors $v_{S}$ and $v_{S_{j}}$ by setting $u=:v_{S}/\|v_{S}\|_{2}$ and $w=:v_{S_{j}}/\|v_{S_{j}}\|_{2}$, and for $\alpha,\beta>0$ we expand $\|A(\alpha u\pm\beta w)\|_{2}^{2}$. Since $v_{S}$ is block $s$-sparse and $v_{S}\pm v_{S_{j}}$ is block $2s$-sparse, the block RIP bounds both expansions for every $|t|\le\delta_{2s|\mathcal{I}}$: an upper bound follows with a suitable choice of $\alpha$ depending on $\delta_{2s|\mathcal{I}}+t$, and a lower bound with a choice of $\alpha$ depending on $\delta_{2s|\mathcal{I}}-t$; combining the two yields the desired estimate. Next, according to Lemma A.1 and the construction of the sets $S_{j}$, the sum $\sum_{j\ge 1}\|v_{S_{j}}\|_{2}$ is controlled by $\frac{1}{\sqrt{s}}\|v_{S^{C}}\|_{2,1}$ together with $\|v_{S}\|_{2}$. Substituting this bound into the previous estimate, we arrive at an inequality of the robust block NSP type whose coefficient is a function $f(t)$. It is not difficult to verify that $f(t)$ attains its maximum on the closed interval $[-\delta_{2s|\mathcal{I}},\delta_{2s|\mathcal{I}}]$ at the point $t=-\delta_{2s|\mathcal{I}}^{2}$, so the bound holds for all $|t|\le\delta_{2s|\mathcal{I}}$. Here we require $4\sqrt{1-\delta_{2s|\mathcal{I}}^{2}}-5\delta_{2s|\mathcal{I}}>0$, which implies $\delta_{2s|\mathcal{I}}^{2}<\frac{16}{41}$, that is, $\delta_{2s|\mathcal{I}}<\frac{4}{\sqrt{41}}\approx 0.6246$.
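A quick numerical sanity check (mine, not from the paper) that $4/\sqrt{41}$ is exactly the threshold where the final requirement of the proof becomes an equality:

```python
import math

# The threshold of Theorem 4: delta = 4/sqrt(41) is the positive root of
# 16*(1 - delta**2) = 25*delta**2, i.e. 41*delta**2 = 16.
delta = 4 / math.sqrt(41)
print(delta)   # 0.6246..., the constant quoted in the abstract
assert abs(16 * (1 - delta ** 2) - 25 * delta ** 2) < 1e-9
```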
Remark 1 Substituting the constants $\tau$ and $\gamma$ of Theorem 4 into the estimates of Theorem 3, we obtain the constants $C_{1},D_{1}$ and $C_{2},D_{2}$ in Theorem 1.
Remark 2 Our result improves that of []; that is, the bound on the block RIC $\delta_{2s|\mathcal{I}}$ is improved from $0.4931$ to $0.6246$.

Conclusions
In this paper, we gave a new bound on the block RIC, $\delta_{2s|\mathcal{I}}<0.6246$. Under this bound, every block $s$-sparse signal can be exactly recovered via the $l_{2}/l_{1}$-minimization approach in the noiseless case and stably recovered in the noisy measurement case.