Estimation of q for \(\ell _{q}\)-minimization in signal recovery with tight frame
Journal of Inequalities and Applications volume 2023, Article number: 156 (2023)
Abstract
This study aims to reconstruct signals that are sparse with a tight frame from undersampled data by using the \(\ell _{q}\)-minimization method. This problem can be cast as an \(\ell _{q}\)-minimization problem with a tight frame subject to an undersampled measurement with a known noise bound. We prove that if the measurement matrix satisfies the restricted isometry property with \(\delta _{2s}\leq 1/2\), there exists a value \(q_{0}\) such that for any \(q\in (0,q_{0}]\), any signal that is s-sparse with a tight frame can be robustly recovered. We estimate \(q_{0} = 2/3\) in the case of \(\delta _{2s}\leq 1/2\) and discuss that the value of \(q_{0}\) can be much higher. We also show that when \(\delta _{2s}\leq 0.3317\), robust recovery via \(\ell _{q}\)-minimization holds for any \(q\in (0,1]\), which is consistent with the case of \(\ell _{q}\)-minimization without a tight frame.
1 Introduction
Sparse representation and sparse signal recovery originated in signal and image processing [14, 15, 26] and have been extended to other areas, such as sampling theory [21, 27], model identification [23, 36], and sensor networks [20, 30, 32]. Most of these applications search for sparse signals. Here, a signal or vector x is considered s-sparse if \(\|x\|_{0}\leq s\), where \(\|\cdot \|_{0}\) is the \(\ell _{0}\)-norm, which counts the nonzero entries of x. Compressed sensing is a sparse signal recovery theory that searches for the sparsest signal in an underdetermined linear system \(Ax = y\), where \(A\in \mathbb{R}^{n\times N}\) (\(n\ll N\)) is the so-called measurement matrix, which is usually of full rank, whereas \(y\in \mathbb{R}^{n}\) is the given measurement vector. This procedure can be cast as an \(\ell _{0}\)-minimization problem. However, \(\ell _{0}\)-minimization is NP-hard [24]; a common relaxation is \(\ell _{1}\)-minimization, which replaces \(\|x\|_{0}\) with \(\|x\|_{1}\). The \(\ell _{1}\)-minimization seeks a sparse solution of \(y=Ax\). Donoho, Candès, Romberg, and Tao specified conditions in [4, 5] under which the solutions of \(\ell _{1}\)-minimization are the solutions of \(\ell _{0}\)-minimization. Furthermore, \(\ell _{1}\)-minimization is a linear programming problem that can be solved using certain algorithms [6, 9, 25, 31, 33].
In some other situations, the signal x is not sparse itself, but it is sparse under some basis [29] (such as a Fourier or wavelet basis), frame [11, 12], or redundant dictionary [10, 28]. In this study, we consider signals x that are sparse with a tight frame. A tight frame is defined as follows.
Definition 1
(Tight frame) [7] Vectors \(D_{1},D_{2},\ldots ,D_{d}\in \mathbb{R}^{N}\) are said to be a tight frame if they satisfy
\[ \sum_{i=1}^{d} \langle x,D_{i} \rangle D_{i}=x \quad \text{for all } x\in \mathbb{R}^{N}. \]
Sometimes, we also say that the matrix \(D = (D_{1},D_{2},\ldots ,D_{d})\in \mathbb{R}^{N\times d}\) is a tight frame; the condition above is then equivalent to \(DD^{*}\) being the \(N\times N\) identity matrix. For some signal x, \(D^{*}x\) is either sparse or approximately sparse. In a noisy setting, the sparsity-seeking question can be expressed as
\[ \min_{x\in \mathbb{R}^{N}} \|D^{*}x\|_{0} \quad \text{subject to } \|Ax-y\|_{2}\leq \epsilon , \tag{1} \]
where \(D^{*}\) is the conjugate transpose of D and ϵ is the energy of the known errors. Its \(\ell _{1}\)-minimization relaxation is available accordingly [1, 8, 16, 18],
\[ \min_{x\in \mathbb{R}^{N}} \|D^{*}x\|_{1} \quad \text{subject to } \|Ax-y\|_{2}\leq \epsilon . \tag{2} \]
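Definition 1 can be checked numerically on a small example. The sketch below verifies the tight-frame identity for the Mercedes-Benz frame in \(\mathbb{R}^{2}\), a standard illustrative frame (our choice here, not the frame used in this paper's later example):

```python
import numpy as np

# A tight frame satisfies sum_i <x, D_i> D_i = x for all x,
# equivalently D D^T = I.  The Mercedes-Benz frame in R^2 (three unit
# vectors at 120 degrees, scaled by sqrt(2/3)) is a standard example.
c = np.sqrt(2.0 / 3.0)
D = c * np.array([[1.0, -0.5,             -0.5],
                  [0.0,  np.sqrt(3) / 2, -np.sqrt(3) / 2]])

# Tight-frame identity: D D^T = I_2.
assert np.allclose(D @ D.T, np.eye(2))

# Frame expansion reconstructs any x: D (D^T x) = x.
x = np.array([0.7, -1.3])
assert np.allclose(D @ (D.T @ x), x)
print("tight-frame identity verified")
```

Note that \(D^{T}x\) here plays the role of \(D^{*}x\) in the text: it is a redundant (length-3) coefficient vector for the length-2 signal x.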
A sufficient condition for the solutions of the \(\ell _{0}\)- and \(\ell _{1}\)-minimizations to coincide under a coherent tight frame is the restricted isometry property adapted to the tight frame D (D-RIP).
Definition 2
(D-RIP) [3] The measurement matrix A satisfies the restricted isometry property adapted to tight frame D with order s if there exists a positive number \(\delta _{s} \in (0,1)\) such that
\[ (1-\delta _{s})\|Dx\|_{2}^{2}\leq \|ADx\|_{2}^{2}\leq (1+\delta _{s})\|Dx\|_{2}^{2} \]
holds for all \(x\in \Sigma _{s}\), where \(\Sigma _{s} = \{x\in \mathbb{R}^{d} : \|x\|_{0}\leq s\}\). Here, \(\delta _{s}\) is the restricted isometry constant (RIC) of order s.
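Computing \(\delta _{s}\) exactly is intractable in general, but random sampling gives a numerical lower bound on it. The sketch below, with an illustrative Gaussian measurement matrix and a random tight frame (both our assumptions, not this paper's construction), estimates how far \(\|ADx\|_{2}^{2}/\|Dx\|_{2}^{2}\) deviates from 1 over random s-sparse coefficient vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, d, s = 60, 20, 30, 3

# Random tight frame: N orthonormal rows of a d x d orthogonal matrix,
# so D D^T = I_N.  (Illustrative construction.)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
D = Q[:N, :]
assert np.allclose(D @ D.T, np.eye(N))

# Gaussian measurement matrix, scaled so E||Av||^2 = ||v||^2.
A = rng.standard_normal((n, N)) / np.sqrt(n)

# Monte-Carlo lower bound on delta_s: the largest observed deviation of
# ||A D x||^2 / ||D x||^2 from 1 over random s-sparse coefficient vectors x.
delta_lb = 0.0
for _ in range(2000):
    x = np.zeros(d)
    supp = rng.choice(d, size=s, replace=False)
    x[supp] = rng.standard_normal(s)
    Dx = D @ x
    ratio = np.linalg.norm(A @ Dx) ** 2 / np.linalg.norm(Dx) ** 2
    delta_lb = max(delta_lb, abs(ratio - 1.0))
print(f"Monte-Carlo lower bound on delta_{s}: {delta_lb:.3f}")
```

This only bounds \(\delta _{s}\) from below; certifying an upper bound would require checking all \(\binom{d}{s}\) supports.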
Let \(v_{\max (s)}\) denote the operator that keeps the s largest-magnitude entries of \(v\in \mathbb{R}^{N}\) and sets the others to zero.
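The operator \(v_{\max (s)}\) is a hard-thresholding step; a minimal sketch:

```python
# v_max(s): keep the s largest-magnitude entries of v, zero out the rest
# (ties broken arbitrarily by the sort).
def v_max_s(v, s):
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:s]
    keep = set(idx)
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

# Illustrative vector: the two largest-magnitude entries survive.
assert v_max_s([0.3, -2.0, 0.0, 1.2, -0.4], 2) == [0.0, -2.0, 0.0, 1.2, 0.0]
```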
If in the D-RIP \(D=Id\), where Id is the identity matrix, then the D-RIP reduces to the traditional RIP. For the traditional RIP, Cai and Zhang provided a sharp bound for \(\delta _{2s}\) in [2], namely \(\delta _{2s} < \sqrt{2}/2\). For the D-RIP, Candès, Eldar et al. showed in [3] that Gaussian, sub-Gaussian, and Bernoulli matrices satisfy the D-RIP with high probability. They also proved that when \(\delta _{2s}<0.08\), the solution of the \(\ell _{1}\)-minimization satisfies
\[ \|\hat{x}-x\|_{2}\leq C_{0}\epsilon +C_{1}\frac{ \|D^{*}x- (D^{*}x )_{\max (s)} \|_{1}}{\sqrt{s}}, \]
where \(C_{0}\) and \(C_{1}\) are constants, x̂ is the recovered signal, and x is the true signal. As shown, the upper bound of \(\|\hat{x}-x\|_{2}\) is controlled by \(\|D^{*}x-(D^{*}x)_{\max (s)}\|_{1}\) and ϵ. If \(D^{*}x\) is s-sparse or approximately s-sparse and ϵ is sufficiently small, the error between the recovered signal and the true signal can be kept within an acceptable range.
The intermediate regime between \(\ell _{0}\)- and \(\ell _{1}\)-minimization is not yet well understood. Thus, we study \(\ell _{q}\)-minimization with \(0< q<1\) [18, 19, 22]. The \(\ell _{q}\)-minimization problem is
\[ \min_{x\in \mathbb{R}^{N}} \|D^{*}x\|_{q}^{q} \quad \text{subject to } \|Ax-y\|_{2}\leq \epsilon . \tag{3} \]
When \(q\rightarrow 0\), \(\ell _{q}\)-minimization approaches \(\ell _{0}\)-minimization, whereas when \(q\rightarrow 1\), it approaches \(\ell _{1}\)-minimization.
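This interpolation can be seen numerically: \(\sum_{i}|x_{i}|^{q}\) tends to the number of nonzero entries of x as \(q\rightarrow 0^{+}\) and equals \(\|x\|_{1}\) at \(q=1\). A quick check with an illustrative vector:

```python
# ||x||_q^q = sum |x_i|^q bridges ||x||_0 (as q -> 0+) and ||x||_1 (at q = 1).
x = [2.0, 0.0, -0.5, 0.0, 1.5]
nnz = sum(1 for v in x if v != 0)     # ||x||_0 = 3
l1 = sum(abs(v) for v in x)           # ||x||_1 = 4.0

def lq_q(x, q):
    """The q-th power of the l_q quasi-norm, sum_i |x_i|^q."""
    return sum(abs(v) ** q for v in x if v != 0)   # 0^q = 0, so skip zeros

assert abs(lq_q(x, 1e-8) - nnz) < 1e-6   # near ||x||_0 for tiny q
assert abs(lq_q(x, 1.0) - l1) < 1e-12    # equals ||x||_1 at q = 1
```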
In general, the recovery condition for \(\ell _{q}\)-minimization (\(0< q < 1\)) is less restrictive than that for \(\ell _{1}\)-minimization. In [34], Zhang and Li proved that if the sensing matrix A satisfies the D-RIP condition \(\delta _{2s}< \sqrt{2}/2\), then all signals x that are s-sparse with a tight frame can be recovered exactly via the constrained \(\ell _{1}\)-minimization. For \(\ell _{q}\)-minimization with a tight frame, Li and Lin showed in [17] that for a tight frame D, if \(\delta _{2s}<1/2\), then there exists \(q_{0}=q_{0}(\delta _{2s})\in (0,1]\) such that for any \(q\in (0,q_{0} )\), the recovered signal x̂ via \(\ell _{q}\)-minimization and the true signal x satisfy
where \(C_{0}\) and \(C_{1}\) are constants that depend on \(\delta _{2s}\) and q. However, this result does not provide the exact value of \(q_{0}\). Subsequently, the D-RIP conditions for \(\ell _{q}\)-minimization with a tight frame were improved. In [35], Zhang and Li showed that if the sensing matrix A satisfies the D-RIP with
where \(\eta \in (1-q,1-\frac{q}{2})\) is the only positive solution of the equation
then any s-sparse signal x with a tight frame can be exactly and stably recovered via \(\ell _{q}\)-minimization in the noiseless and noisy cases, respectively. D-RIP condition (4) for \(\ell _{q}\)-minimization is less restrictive than the condition \(\delta _{2s}< \sqrt{2}/2\) for \(\ell _{1}\)-minimization. For example, if we let \(q=1/2\), then (4) gives \(\delta _{2s} < 0.859\).
We provide an example to illustrate that if \(\delta _{2s}> \sqrt{2}/2\), \(\ell _{1}\)-minimization may fail, but \(\ell _{q}\)-minimization works. We construct a measurement matrix \(A \in \mathbb{R}^{2 \times 3} \), and a tight frame \(D \in \mathbb{R}^{3 \times 5} \), as follows
We can calculate that \(\delta _{2} = 0.75 > \sqrt{2}/2\). Vectors \(x^{(1)} = (2,0,0)^{T}\) and \(x^{(2)} = (0,1,1)^{T}\) have the same observed vector, namely \(Ax^{(1)} = Ax^{(2)} \). We have
\(D^{*} x^{(1)}\) and \(D^{*}x^{(2)}\) have the same \(\ell _{1}\)-norm, which means that signal recovery for \(x^{(1)}\) through \(\ell _{1}\)-minimization fails; \(\ell _{q}\)-minimization is necessary in this case. The general solution of the equations \(Ax=Ax^{(1)}\) is \(x = (2-2c,c,c)^{T}\), where c is an arbitrary real number. We can derive
where the first inequality uses the fact that if \(a>0\), \(b>0\), and \(0< q<1\), then \((a+b)^{q} \leq a^{q} +b^{q}\). Hence, we have \(\|D^{*} x^{(1)} \|_{q} < \|D^{*}x\|_{q}\) for \(0< q <1\) and any solution \(x\neq x^{(1)}\) of the equations \(Ax=Ax^{(1)}\). Therefore, \(\ell _{q}\)-minimization can recover signal \(x^{(1)}\).
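Both ingredients of this argument are easy to check numerically: the subadditivity inequality \((a+b)^{q}\leq a^{q}+b^{q}\), and the fact that among vectors of equal \(\ell _{1}\)-norm, the sparser one has the strictly smaller \(\ell _{q}\) quasi-norm (the vectors below are our illustration, not the paper's example):

```python
import random

# Subadditivity: (a + b)^q <= a^q + b^q for a, b > 0 and 0 < q < 1,
# checked on random inputs.
random.seed(1)
for _ in range(10000):
    a = random.uniform(0.0, 10.0) + 1e-9
    b = random.uniform(0.0, 10.0) + 1e-9
    q = random.uniform(0.01, 0.99)
    assert (a + b) ** q <= a ** q + b ** q + 1e-12

# Why l_q (0 < q < 1) prefers sparse solutions while l_1 may not:
# u and v have equal l_1 norm, but the sparser u has smaller l_q quasi-norm.
u, v, q = [2.0, 0.0], [1.0, 1.0], 0.5
assert sum(abs(t) for t in u) == sum(abs(t) for t in v) == 2.0
assert sum(abs(t) ** q for t in u) < sum(abs(t) ** q for t in v)  # 2^0.5 < 2
```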
This study examines signal recovery with a tight frame via \(\ell _{q}\)-minimization for the case of a restricted isometry constant \(\delta _{2s} < 1/2\). The main contribution shows not only the existence of \(q_{0}\), such that for any \(q \in (0,q_{0}]\), any s-sparse signal with a tight frame can be recovered via \(\ell _{q}\)-minimization, but also the exact value \(q_{0} = 2/3\). Numerical computation also demonstrates that the value of \(q_{0}\) can be increased to \(q_{0} = 0.97\).
The remainder of this paper is organized as follows. In Sect. 2, some useful lemmas and their proofs are outlined, and Sect. 3 presents the main theorems. We provide the proofs of these main theorems in Sect. 4. Conclusions are presented in Sect. 5.
Notations: Given a signal \(x= (x_{1},x_{2},\ldots ,x_{N} )^{T}\), the \(\ell _{0}\)-norm is the number of its nonzero entries, that is, \(\|x\|_{0}=Card ( supp(x) )\). Here, \(Card(\cdot )\) is the cardinality of a set and \(supp(x)\) is the support set of x. The \(\ell _{1}\)-norm of vector x is the sum of the absolute values of its entries, that is, \(\|x\|_{1}=\sum_{i\geq 1}|x_{i}|\). We can define its \(\ell _{q}\)-norm with \(0< q<1\) as \(\|x\|_{q}= (\sum_{i\geq 1}|x_{i}|^{q} )^{1/q}\). We can also define the \(\ell _{\infty}\)-norm of x as \(\|x\|_{\infty}=\max_{1\leq i\leq N}\{|x_{i}|\}\) and the \(\ell _{-\infty}\)-pseudonorm of x as \(\|x\|_{-\infty}=\min_{1\leq i\leq N}\{|x_{i}|\}\), respectively. Given \(x= (x_{1},x_{2},\ldots ,x_{N} )^{T}\in \mathbb{R}^{N}\), \(x_{\max (s)}\) denotes the vector that keeps the largest s entries in absolute value and sets the others to zero. For a matrix \(D\in \mathbb{R}^{N\times d}\) and index subset \(T\subset \{1,2,\ldots ,d\}\), \(D_{T}\) denotes the matrix D restricted to the columns indexed by T, \(D^{*}_{T}\) is the conjugate transpose of \(D_{T}\), and \(T^{C}\) is the complement of T in \(\{1,2,\ldots ,d\}\). Given a vector \(h\in \mathbb{R}^{N}\), then \(D^{*}h= ( (D^{*}h )_{1}, (D^{*}h )_{2}, \ldots , (D^{*}h )_{d} )^{T}\in \mathbb{R}^{d}\). Suppose \(\{j_{1},j_{2},\ldots ,j_{d}\}\) is the rearrangement of \(\{1,2,\ldots ,d\}\) such that vector \(D^{*}h\) is monotonically decreasing in absolute value, that is, \(\vert (D^{*}h )_{j_{1}} \vert \geq \vert (D^{*}h )_{j_{2}} \vert \geq \cdots \geq \vert (D^{*}h )_{j_{d}} \vert \). Then divide the set \(\{j_{1},j_{2},\ldots ,j_{d}\}\) into subsets of cardinality s starting from its head (if the cardinality of the last subset is less than s, simply keep it), that is, \(T_{0}=\{j_{1},j_{2},\ldots ,j_{s}\}\), \(T_{1}=\{j_{s+1},j_{s+2},\ldots ,j_{2s}\}\), \(T_{2}=\{j_{2s+1},j_{2s+2},\ldots ,j_{3s}\}\), … . Here, let \(T=T_{0}\).
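The block partition \(T_{0},T_{1},T_{2},\ldots \) described above can be sketched as follows (illustrative implementation and data):

```python
# Sort the entries of D^* h by decreasing magnitude and cut the index
# sequence into consecutive blocks of size s; the last block may be shorter.
def block_partition(Dh, s):
    """Return [T_0, T_1, ...], each a list of indices into Dh."""
    order = sorted(range(len(Dh)), key=lambda j: abs(Dh[j]), reverse=True)
    return [order[i:i + s] for i in range(0, len(order), s)]

Dh = [0.1, -3.0, 0.7, 2.5, -0.2, 1.1, 0.0]   # a made-up D^* h, d = 7
blocks = block_partition(Dh, s=3)
assert blocks[0] == [1, 3, 5]                 # T_0: |-3.0|, |2.5|, |1.1|
assert [len(b) for b in blocks] == [3, 3, 1]  # last block shorter than s
```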
2 Some useful lemmas
First, we provide the relationship between the \(\ell _{1}\)- and \(\ell _{q}\)-norms, which is used to estimate the error bound.
Lemma 3
([17]) Let \(0< q\leq 1\) and \(x\in \mathbb{R}^{N}\); then
where \(Q_{q}=q^{\frac{q}{1-q}}-q^{\frac{1}{1-q}}\). Additionally, \(Q_{q}\) is a monotone, convex function of q. Its limits as \(q\rightarrow 0^{+}\) and \(q\rightarrow 1^{-}\) are, respectively,
\[ \lim_{q\rightarrow 0^{+}}Q_{q}=1 \quad \text{and}\quad \lim_{q\rightarrow 1^{-}}Q_{q}=0. \]
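The two limits can be confirmed numerically from the formula for \(Q_{q}\):

```python
# Q_q = q^{q/(1-q)} - q^{1/(1-q)} from Lemma 3; its limits are 1 as
# q -> 0+ and 0 as q -> 1-.  Checked near the endpoints.
def Q(q):
    return q ** (q / (1.0 - q)) - q ** (1.0 / (1.0 - q))

assert abs(Q(1e-6) - 1.0) < 1e-3   # lim_{q -> 0+} Q_q = 1
assert abs(Q(1.0 - 1e-6)) < 1e-5   # lim_{q -> 1-} Q_q = 0
assert Q(0.5) == 0.25              # Q_{1/2} = (1/2)^1 - (1/2)^2
```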
The relationship between the \(\ell _{2}\)- and \(\ell _{q}\)-norms is also required when estimating the error bound.
Lemma 4
For a fixed \(x\in \mathbb{R}^{N}\) and \(0< q\leq 1\), the following inequalities hold
Proof
According to the Cauchy–Schwarz inequality,
In [13], the relationship between the \(\ell _{1}\) and \(\ell _{2}\) norms is
Using Lemma 3 and inequalities (8) and (9), we can derive the result. □
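The two classical one-sided bounds this proof relies on can be spot-checked numerically. The sketch below assumes the standard forms \(\|x\|_{2}\leq \|x\|_{q}\) for \(0<q\leq 1\) and the Cauchy–Schwarz bound \(\|x\|_{1}\leq \sqrt{N}\|x\|_{2}\), consistent with the roles of (8) and (9) in the proof:

```python
import math
import random

# ||x||_2 <= ||x||_q for 0 < q <= 1 (norm monotonicity in p), and
# ||x||_1 <= sqrt(N) ||x||_2 (Cauchy-Schwarz), checked on random vectors.
def norm_p(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

random.seed(2)
for _ in range(1000):
    N = random.randint(1, 8)
    x = [random.uniform(-5, 5) for _ in range(N)]
    q = random.uniform(0.05, 1.0)
    assert norm_p(x, 2) <= norm_p(x, q) + 1e-9
    assert norm_p(x, 1) <= math.sqrt(N) * norm_p(x, 2) + 1e-9
```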
For an index set \(T\subset \{1,2,\ldots ,d\}\), denote \(D^{*}_{T}x := ( D_{T} )^{*}x\). Suppose that x̂ is the solution to problem (3) and \(x\in \mathbb{R}^{N}\) satisfies \(\|y-Ax\|_{2}\leq \epsilon \). Let
\[ h=\hat{x}-x, \tag{10} \]
then \(D^{*}h = ( (D^{*}h )_{1}, (D^{*}h )_{2}, \ldots , (D^{*}h )_{d} )^{T}\). Without loss of generality, let \(\{j_{1},j_{2},\ldots ,j_{d}\}\) be a rearrangement of \(\{1,2,\ldots ,d\}\) such that
\[ \vert (D^{*}h )_{j_{1}} \vert \geq \vert (D^{*}h )_{j_{2}} \vert \geq \cdots \geq \vert (D^{*}h )_{j_{d}} \vert . \]
Then denote
\[ T_{0}=\{j_{1},\ldots ,j_{s}\},\qquad T_{1}=\{j_{s+1},\ldots ,j_{2s}\},\qquad T_{2}=\{j_{2s+1},\ldots ,j_{3s}\},\quad \ldots . \tag{11} \]
Clearly, \(D^{*}h=\sum_{i\geq 0}D^{*}_{T_{i}}h\). Define ω and Ψ as follows
Thus, \(0\leq \omega \leq 1\) and \(\sum_{i\geq 2}\|D^{*}_{T_{i}}h\|^{q}_{q} = (1-\omega ) \sum_{i \geq 1}\|D^{*}_{T_{i}}h\|^{q}_{q} \).
Li and Lin showed the following lemma in [17], which gives a bound on the squared \(\ell _{2}\)-norm of \(D^{*}_{T_{i}}h\) with \(i \ge 2\). These results can be obtained from Lemma 4.1 and (3.5) of [17].
Lemma 5
(Lemma 4.1 and inequality (3.5) in [17]) Let \(0< q\leq 1\), and let h, \(\{T_{i}, i\geq 0\}\), and Ψ be defined in (10), (11), and (13), respectively; then the following inequalities hold:
where s denotes the sparsity level.
The bound of the \(\ell _{2}\)-norm of \(D^{*}_{T_{i}}h\) with \(i \ge 2\) is also required and is given by the following lemma.
Lemma 6
Let \(0< q\leq 1\), h, \(\{T_{i}, i\geq 0\}\), and ω be defined by (10), (11), and (12). Then,
Proof
According to the relation between the \(\ell _{2}\)-norm and \(\ell _{q}\)-norm in Lemma 4, we have
Summing up for i, we have
Note that
We have
By substituting (18) into (17) and combining (12), we can derive
Note that the second inequality in (19) uses the following conclusion: if \(a>0\), \(b>0\) and \(0< q<1\), then \((a+b)^{q} \leq a^{q} +b^{q}\). The third inequality in (19) uses the definition of ω in (12). Moreover, in the first term of the second line in (19),
Then we obtain the third line in the inequalities (19). The proof is completed. □
Two functions \(\alpha (\omega )\) and \(\beta (\omega )\) of \(0\leq \omega \leq 1\) are defined as follows:
According to the definition of Ψ and Lemmas 5 and 6, we derive
and we have
From the fact below
and by combining Lemma 5, we can derive
which means that
Substituting inequalities (22) and (23) into the inequality above, we obtain
Now, let
and
Because of
we have \(\lambda \geq 1\) and
The above inequalities imply that
Therefore, the following conclusion is drawn:
Here, the inequality is used again, that is, if \(a>0\), \(b>0\), and \(0< q<1\), then \((a+b )^{q}\leq a^{q}+b^{q}\).
For any index set Ω with \(|\Omega |\leq s\), we have
and
which means that
Specifically, if the cardinality of Ω is s, that is, \(|\Omega |=s\), and it satisfies \(D^{*}_{\Omega}x = D^{*}x - (D^{*}x )_{\max (s)}\), then we have
We can derive
Define
Denote \((\frac{\beta (\omega _{1})}{1-\delta _{2s}} )^{q/2}\) by σ, that is, \(\sigma = (\frac{\beta (\omega _{1})}{1-\delta _{2s}} )^{q/2}\). When \(\delta _{2s}<\rho (q)\), proving \(1-\sigma >0\) is equivalent to proving \(\beta (\omega _{1})/(1-\delta _{2s})<1\). By the definitions of \(\alpha (\omega )\) and \(\beta (\omega )\), we have
If \(\delta _{2s} <\rho (q)\), we have
Then we can derive that \(\beta (\omega _{1})/(1-\delta _{2s})<1\). Hence, \(1-\sigma >0\) whenever \(\delta _{2s}<\rho (q)\). Therefore, by inequality (29), we have
The following lemma is simple, but useful for estimating the error bound in the signal recovery.
Lemma 7
Let \(0< q \leq 1\), then
hold for all \(a\geq 0 \), \(b\geq 0\), and \(c\geq 0\).
Proof
Inequality (32) can be shown using Lemma 3 with \(N=2\), whereas (33) can be verified by squaring both sides. □
3 Main results
We provide the error bound between the recovered signal x̂ and any solution to \(Ax =y\). This error bound is measured by the noise term ϵ and sparse term \(\|D^{*}x- (D^{*}x )_{\max (s)} \|_{q}\).
Theorem 8
Let D be the matrix with the columns forming a tight frame and x̂ be the solution of \(\ell _{q}\)-minimization. Then, for any fixed \(0< q\leq 1\) and D-RIP constant \(\delta _{2s}<\rho (q)\), we have
where
In this error bound, if the noise term \(\epsilon = 0\), the setting is noiseless. If there exists a solution x that is s-sparse with the tight frame D, the true signal x is recovered exactly in the noiseless setting.
Remark 9
In [17], Li and Lin solved the existence problem of \(q_{0}\) for recovering a signal with coherent tight frames via \(\ell _{q}\)-minimization. However, the value of \(q_{0}\) was not provided in their paper. In fact, it can be estimated.
If \(\omega =0\), then \({D^{*}}x = 0\) and Theorem 8 holds trivially. For \(0<\omega \leq 1\), the following conclusion can be drawn.
Theorem 10
If the measurement matrix A satisfies the restricted isometry property with tight frame D and \(\delta _{2s}<0.3317\), then for any \(q \in (0,1]\), we have
where \(C_{0}\) and \(C_{1}\) are the constants in Theorem 8.
Remark 11
In fact, \(\delta _{2s}\) can take values much larger than 0.3317: if \(\delta _{2s}<0.493\), then q can be arbitrary in the range \((0,1]\), and \(\ell _{q}\)-minimization still recovers the signal robustly with a coherent tight frame, so the conclusion of Theorem 10 holds. However, this requires a different proof.
In [17], Li and Lin showed that if \(\delta _{2s}< 1/2\), there exists a value \(q_{0}\) such that the signals can be recovered via \(\ell _{q}\)-minimization. The following theorem improves this result and provides an exact value for \(q_{0}\).
Theorem 12
If the measurement matrix A satisfies the restricted isometry property with tight frame D and \(\delta _{2s} < 1/2\), then there exists a value \(q_{0}= 2/3\), such that for any \(q \in (0,2/3]\), \(\delta _{2s} < 1/2 \leq \rho (q)\) holds. Furthermore,
where \(C_{0}\) and \(C_{1}\) are the constants in Theorem 8.
Remark 13
In [17], Li and Lin proved the existence of \(q_{0}\); however, no estimate of \(q_{0}\) was given there. For this problem, we not only prove a result similar to that in [17], but also estimate \(q_{0} = 2/3\).
Remark 14
\(q_{0}= 2/3\) is not the best value for \(q_{0}\); it can be much larger. The curve of \(\rho (q)\) drawn using MATLAB suggests that \(q_{0}=0.97\) also satisfies \(\delta _{2s} < 1/2 \leq \rho (q)\), so Theorem 12 would still hold. However, proving this analytically is considerably more difficult.
4 Proof of main results
We now give the proof of each theorem.
4.1 Proof of Theorem 8
Proof
Using inequality (15) in Lemma 5, we have
where the last inequality uses the result in (22). Therefore, by Lemma 7, we have
where \(C_{0}\) and \(C_{1}\) are given by (35), the second inequality uses the inequality (33) in Lemma 7, the third inequality uses inequality (14) in Lemma 5, and the fourth inequality uses inequality (32) in Lemma 7 and (31). □
4.2 Proof of Theorem 10
Proof
We discuss the case where \(0<\omega \leq 1\). Theorem 8 shows that this conclusion holds as long as \({\delta _{2s}} < 0.3317 \leq \rho (q)\). According to the definition of \(\rho (q)\),
Therefore, Theorem 10 holds if for any \(0<\omega \leq 1\) and all \(q \in (0,1]\), the following inequality holds:
Let \(a:= 1/q \in [1,+\infty )\), then let
for all \(0<\omega \leq 1\) and all \(a \in [1,+\infty )\), where \(n_{1},n_{2} \in N^{+}\) and \(n_{1} \leq n_{2}\).
The following procedure estimates the lower bound of \({n_{1}}/{n_{2}}\). Inequality (39) is equivalent to
Inequality (40) holds if the infimum of the left side is greater than or equal to the supremum of the right side. Let \(f(\omega ,a) = ( {{n_{2}} - {n_{1}}} ) - {n_{2}}{ \omega ^{2a - 1}} + ( {2{n_{2}} - {n_{1}}} ){\omega ^{2a}}\) and \(g(\omega ,a) = {(1 - \omega )^{a}} + \frac{5}{4}{\omega ^{a}}\), calculate the partial derivatives of the two functions, and set them to zero. Then, we have
From equations (41) it can be derived that \(2a{n_{2}} = 2a{n_{2}} - {n_{2}}\), which does not hold because \({n_{2}} \in {N^{+}}\). Therefore, \(f(\omega ,a)\) has no stationary points. In equations (42), because \(\frac{\partial g}{\partial a} < 0\), \(g(\omega ,a)\) also has no stationary points.
By evaluating the functions on the boundary, we find that \(f(\omega , a)\) attains its minimum value at \(a=1\), whereas \(g(\omega ,a)\) attains its maximum value at \(\omega =1\). It is not difficult to compute that, for all \(0<\omega \leq 1\) and all \(a \in [1,+\infty )\),
Inequality \(\inf_{\omega ,a} f(\omega ,a) \ge \sup_{\omega ,a} g(\omega ,a)\), i.e., \(\frac{{7n_{2}^{2} + 4n_{1}^{2} - 12{n_{1}}{n_{2}}}}{{4(2{n_{2}} - {n_{1}})}} \ge \frac{{25}}{{16}}{n_{1}}\), is equivalent to
Because \(0 < {n_{1}}/{n_{2}} \le 1\), inequality (43) holds when \(0 < {n_{1}}/{n_{2}} \le 0.3317\). In other words, 0.3317 is the lower bound of \(\rho (q)\), so \({\delta _{2s}} < 0.3317 \le \rho (q)\) holds. The proof is complete. □
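The threshold 0.3317 can be recovered numerically. Dividing the last inequality by \(n_{2}\) and writing \(t=n_{1}/n_{2}\), it becomes \((7+4t^{2}-12t)/(4(2-t))\geq 25t/16\); clearing the positive denominators gives \(41t^{2}-98t+28\geq 0\), whose smaller root is the quoted bound:

```python
import math

# Smaller root of 41 t^2 - 98 t + 28 = 0, where t = n_1/n_2; this is the
# threshold 0.3317 quoted in the proof.
t_threshold = (98 - math.sqrt(98 ** 2 - 4 * 41 * 28)) / (2 * 41)
assert abs(t_threshold - 0.3317) < 5e-4

# The quadratic (hence the original fraction inequality) holds on (0, 0.3317].
for k in range(1, 1001):
    t = 0.3317 * k / 1000
    assert 41 * t ** 2 - 98 * t + 28 >= 0
    assert (7 + 4 * t * t - 12 * t) / (4 * (2 - t)) >= 25 * t / 16
```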
4.3 Proof of Theorem 12
Proof
Let \(a:=1/q \in [ 3/2,+\infty )\). According to the definitions of \(\rho (q)\) and a, we only need to prove that for all \(0< \omega \leq 1\) and all \(a \in [ 3/2,+\infty )\), the following inequality holds:
Inequality (44) is equivalent to
Let \(f(\omega ,a) = 1 - 2{\omega ^{2a - 1}} + 3{\omega ^{2a}} - { [ {{{(1 - \omega )}^{a}} + \frac{5}{4}{\omega ^{a}}} ]^{2}}\), then its partial derivative with respect to a is calculated as
Therefore, \(f(\omega ,a)\) has no stationary point, so its extreme values are attained on the boundary, and \(f(\omega ,a)\) reaches its minimum value at \(a=3/2\). To prove that \(f(\omega ,a) \ge 0\) for all ω and a, we must prove that for all ω the following inequality holds,
Inequality (47) is equivalent to
and we derive,
The coefficient of \(\omega ^{4}\) is separated into two parts,
Let \(g(\omega ) = 9 - \frac{{145}}{4}\omega + 58{\omega ^{2}} - \frac{{85}}{2}{\omega ^{3}} + ( { \frac{{{{38}^{2}}}}{{{{16}^{2}}}} + \frac{{25}}{4}} ){\omega ^{4}}\), and its derivative is
For \(0 < \omega \leq 1\), because \(\frac{{d^{2}}g}{d{\omega ^{2}}} > 0\), the derivative \(\frac{dg}{d\omega }\) is increasing; since \(\frac{dg}{d\omega } \vert _{\omega =1} = -3/16 < 0\), we have \(\frac{dg}{d\omega } <0\) on \((0,1]\), which means that \(g(\omega )\) decreases monotonically. Therefore, since \(g(1) = 9/64 > 0\), we know that \(g (\omega ) > 0 \) for any \(0 < \omega \leq 1 \). Because \(\frac{{{{39}^{2}} - {38}^{2}}}{{16}^{2}}{\omega ^{4}} > 0\), inequality (49) holds for any \(0 < \omega \leq 1\). The proof is complete. □
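The claimed behavior of \(g(\omega )\) is easy to confirm numerically from its explicit formula:

```python
# g(w) = 9 - (145/4) w + 58 w^2 - (85/2) w^3 + (38^2/16^2 + 25/4) w^4;
# check that it is decreasing and strictly positive on (0, 1].
def g(w):
    return (9 - 145 / 4 * w + 58 * w ** 2 - 85 / 2 * w ** 3
            + (38 ** 2 / 16 ** 2 + 25 / 4) * w ** 4)

prev = g(1e-6)
for k in range(1, 1001):
    w = k / 1000
    assert g(w) > 0               # positivity on (0, 1]
    assert g(w) <= prev + 1e-12   # monotone decrease along the grid
    prev = g(w)

# The minimum over (0, 1] is attained at w = 1 and equals 9/64 > 0.
assert abs(g(1.0) - 9 / 64) < 1e-12
```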
5 Conclusion
Regarding the value of q for sparse signal recovery via \(\ell _{q}\)-minimization, the existence of a suitable \(q_{0}\) had been proven: if the measurement matrix satisfies the D-RIP with \(\delta _{2s}\leq 1/2\), then there exists a value \(q_{0}\) such that for any \(q\in (0,q_{0}]\), any signal that is s-sparse with a tight frame can be robustly recovered. In this work, we estimated \(q_{0} = 2/3\) in the case of \(\delta _{2s}\leq 1/2\) and discussed that the value of \(q_{0}\) can be much higher. We also proved that if \(\delta _{2s}\leq 0.3317\), then for any \(q\in (0,1]\), robust recovery via \(\ell _{q}\)-minimization holds, which is consistent with the case of \(\ell _{q}\)-minimization without a tight frame.
Data availability
Not applicable.
References
Bi, N., Liang, K.H.: Iteratively reweighted algorithm for signals recovery with coherent tight frame. Math. Methods Appl. Sci. 41(14), 5481–5492 (2018)
Cai, T., Zhang, A.: Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. 35, 74–93 (2013)
Candès, E., Eldar, Y., Needell, D.: Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011)
Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1999)
Christensen, O.: Frames and bases: an introductory course. Appl. Numer. Harmon. Anal. 32(5), 368–392 (2008)
Donoho, D., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell _{1}\) minimization. Proc. Natl. Acad. Sci. USA 100, 2197–2202 (2003)
Erkoc, M., Karaboga, N.: A novel sparse reconstruction method based on multi-objective artificial bee colony algorithm. Signal Process. 189(12), 108283 (2021)
Gribonval, R., Nielsen, M.: Highly sparse representations from dictionaries are unique and independent of the sparseness measure. IEEE Trans. Inf. Theory 49(6), 1579–1581 (2003)
Huang, W., Zhang, C., Wu, S.: Nonconvex regularized sparse representation in a tight frame for gear fault diagnosis. Meas. Sci. Technol. 33, 085901 (2022)
Jyothi, R., Babu, P.: A monotonic algorithm to design large dimensional equiangular tight frames for applications in compressed sensing. Signal Process. 169(1), 1–17 (2022)
Lai, M., Liu, L.: A new estimate of restricted isometry constants for sparse solutions. Appl. Comput. Harmon. Anal. 30, 402–406 (2011)
Lal, B., Gravina, R., Spagnolo, F., et al.: Compressed sensing approach for physiological signals: a review. IEEE Sens. J. 23(6), 5513–5534 (2023)
Lee, B., Ko, K., Hong, J., et al.: Information bottleneck measurement for compressed sensing image reconstruction. IEEE Signal Process. Lett. 29, 1943–1947 (2022)
Li, P., Ge, H., Geng, P.: Signal and image reconstruction with tight frames via unconstrained \(\ell _{1}-\alpha \ell _{2}\)-analysis minimizations (2021). arXiv:2112.14510
Li, S., Lin, J.: Compressed sensing with coherent tight frames via \(\ell _{q}\)-minimization for \(0< q\leq 1\). Inverse Probl. Imaging 8, 761–777 (2017)
Liang, K.H., Bi, N.: A new upper bound of p for \(L_{p}\)-minimization in compressed sensing. Signal Process. 176(1), 1–12 (2020)
Liang, K.H., Clay, M.: Iterative re-weighted least squares algorithm for \(L_{p}\)-minimization with tight frame and \(0< p \le 1\). Linear Algebra Appl. 581(1), 413–434 (2019)
Liang, K.H., Li, S., Zhang, W., et al.: Reconstruction of enterprise debt networks based on compressed sensing. Sci. Rep. 13, 2514–2522 (2023)
Loss, T., Colbrook, M., Hansen, A.: Stratified sampling based compressed sensing for structured signals. IEEE Trans. Signal Process. 70, 3530–3539 (2022)
Luo, X., Yang, W., Ha, J., et al.: Non-convex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process. 12(2), 1–13 (2020)
Mi, M., Che, Y., Li, H., Zhao, S.: Identification of rotor position of permanent magnet spherical motor based on compressed sensing. IEEE Trans. Ind. Inform. 19(8), 9157–9164 (2023)
Natarajan, B.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227–234 (1995)
Needell, D., Tropp, J.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2008)
Nguyen, T., Jagatap, G., Hegde, C.: Provable compressed sensing with generative priors via Langevin dynamics. IEEE Trans. Inf. Theory 68(11), 7410–7422 (2022)
Okabe, Y., Kanemoto, D., Maida, O., Hirose, T.: Compressed sensing EEG measurement technique with normally distributed sampling series. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 105(10), 1429–1433 (2022)
Rauhut, H., Schnass, K., Vandergheynst, P.: Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 54, 2210–2219 (2008)
Rudelson, M., Vershynin, R.: On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61, 1025–1045 (2008)
Sekar, K., Devi, K., Srinivasan, P.: Compressed tensor completion: a robust technique for fast and efficient data reconstruction in wireless sensor networks. IEEE Sens. J. 22(11), 10794–10807 (2022)
Wang, Y., Liu, Y., Bai, X., et al.: Sequential color ghost imaging based on compressed sensing algorithm of post-processing measurement matrix. Phys. Scr. 98, 045110 (2023)
Wei, P., He, F.: The compressed sensing of wireless sensor networks based on Internet of things. IEEE Sens. J. 21(22), 25267–25273 (2021)
Yang, H., Yu, N.: A fast algorithm for joint sparse signal recovery in 1-bit compressed sensing. AEÜ, Int. J. Electron. Commun. 138(8), 153856 (2021)
Zhang, R., Li, S.: Optimal D-RIP bounds in compressed sensing. Acta Math. Sin. 31(6), 755–766 (2015)
Zhang, R., Li, S.: Optimal RIP bounds for sparse signals recovery via \(\ell _{p}\) minimization. Appl. Comput. Harmon. Anal. 47(3), 566–584 (2019)
Zhou, J., Kato, B., Wang, Y.: Operational modal analysis with compressed measurements based on prior information. Measurement 211, 112644 (2023)
Acknowledgements
This research was partly supported by the NSF of China under grant nos. 11471012 and 11971491, the NSF of Guangdong under grant nos. 2018A0303130136 and 2017A030310650, the Science and Technology Planning Project of Guangdong under grant nos. 2015A070704059 and 2015A030402008, project of Education Department of Guangdong Province under grant no. 2020KZDZX1120, the college students’ innovation and entrepreneurship training program under grant no. 202211347036, the Common Technical Innovation Team of Guangdong Province on Preservation and Logistics of Agricultural Products under grant nos. 2021KJ145 and 2023KJ145, Guangzhou Science and Technology Project under grant no. 201704030131, and the Characteristic Innovation Project of Universities in Guangdong (Natural Science) under grant no. 2018KTSCX094. The authors gratefully acknowledge all the sponsors. This paper is one of the achievements of the Agri-product Digital Logistics Research Center of Guangdong-Hong Kong-Macao Greater Bay Area. The corresponding authors of this paper are Kaihao Liang and Wenfeng Zhang.
Contributions
Kaihao Liang proved the main theorem and wrote the manuscript, Chaolong Zhang provided the proof of some lemmas and the discussion of the result of the main theorem, Wenfeng Zhang provided the funding for this project, as well as the research ideas. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Liang, K., Zhang, C. & Zhang, W. Estimation of q for \(\ell _{q}\)-minimization in signal recovery with tight frame. J Inequal Appl 2023, 156 (2023). https://doi.org/10.1186/s13660-023-03068-z