
Shearlet approximations to the inverse of a family of linear operators

Abstract

The Radon transform plays an important role in applied mathematics. It is a fundamental problem to reconstruct images from noisy observations of Radon data. Compared with traditional methods, Colonna, Easley et al. apply shearlets to deal with the inverse problem of the Radon transform and obtain a more effective reconstruction. This paper extends their work to a class of linear operators which contains the Radon, Bessel and Riesz fractional integration transforms as special examples.

MSC:42C15, 42C40.

1 Introduction and preliminary

The Radon transform is an important tool in medical imaging. Although $f\in L^1(\mathbb{R}^2)$ can be recovered analytically from the Radon data $Rf(\theta,t)$, the solution is unstable, and in practice the data are corrupted by noise [1]. In order to recover the object $f$ stably and control the amplification of noise in the reconstruction, many regularization methods were introduced, including the Fourier method, singular value decomposition, etc. [2]. However, those methods produce a blurred version of the original image.

Curvelets and shearlets were then proposed, and they proved to be efficient in dealing with edges [3–7]. In 2002, Candès and Donoho applied curvelets [5] to the inverse problem

$$Y=Rf+\varepsilon W,$$
(1.1)

where the recovered function $f$ is compactly supported and twice continuously differentiable away from a smooth edge; $W$ denotes a Wiener sheet; $\varepsilon$ is the noise level. Because curvelets have a complicated structure, Colonna, Easley et al. used shearlets to deal with the problem (1.1) in 2010 and obtained an effective reconstruction algorithm [8].

Note that the Bessel transform and the Riesz fractional integration transform arise in many scientific areas ranging from physical chemistry to extragalactic astronomy. This paper therefore considers a more general problem,

$$Y=Kf+\varepsilon W,$$
(1.2)

where $K$ stands for a linear operator mapping the Hilbert space $L^2(\mathbb{R}^2)$ to another Hilbert space $Y$ and satisfies

$$\bigl(K^{*}Kf\bigr)^{\wedge}(\xi)=\bigl(b+|\xi|^{2}\bigr)^{-\alpha}\hat{f}(\xi)$$
(1.3)

with $b\ge 0$, $\alpha>0$ ($K^{*}$ is the adjoint operator of $K$). Here and in what follows, $\hat{f}$ denotes the Fourier transform of $f$. The next section shows that the Radon, Bessel and Riesz fractional integration transforms satisfy the condition (1.3).
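To fix ideas, the following small numerical sketch shows how an operator satisfying (1.3) is used in practice: $K^{*}K$ acts as the Fourier multiplier $(b+|\xi|^{2})^{-\alpha}$ and can therefore be applied in the frequency domain. The periodic grid, the FFT discretization and the parameter choices ($b=1$, $\alpha=1/2$) are our own illustrative assumptions, not constructions from the paper.

```python
import numpy as np

# Illustrative sketch only (our own discretization, not part of the paper):
# by (1.3), K*K is the Fourier multiplier (b + |xi|^2)^(-alpha), so it can be
# applied on a periodic grid by multiplying in the frequency domain.
def apply_KstarK(f, b=1.0, alpha=0.5, length=1.0):
    """Apply the multiplier (b + |xi|^2)^(-alpha) to a 2-D array f."""
    n = f.shape[0]
    freqs = np.fft.fftfreq(n, d=length / n)              # frequencies xi_1, xi_2
    xi1, xi2 = np.meshgrid(freqs, freqs, indexing="ij")
    multiplier = (b + xi1**2 + xi2**2) ** (-alpha)
    return np.real(np.fft.ifft2(multiplier * np.fft.fft2(f)))

# A smooth bump: K*K damps high frequencies, so the output is a smoothed copy.
x = np.linspace(0, 1, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
bump = np.exp(-80 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
print(apply_KstarK(bump).max())
```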

The current paper is organized as follows. Section 2 presents three examples of operators satisfying (1.3) and several lemmas. An approximation result is proved in the last section; it contains Theorem 4.2 of [8] as a special case.

At the end of this section, we introduce some basic facts about shearlets, which will be used in our discussions. The Fourier transform of a function $f\in L^1(\mathbb{R}^2)\cap L^2(\mathbb{R}^2)$ is defined by

$$\hat{f}(\xi)=\int_{\mathbb{R}^2} f(x)e^{-2\pi i x\cdot\xi}\,dx.$$

This definition extends to $L^2(\mathbb{R}^2)$ functions in the classical way.

There exist many different constructions of discrete shearlets. We introduce the construction of [8] by taking two functions $\psi_1,\psi_2$ of one variable such that $\hat{\psi}_1,\hat{\psi}_2\in C^{\infty}(\mathbb{R})$ with supports $\operatorname{supp}\hat{\psi}_1\subseteq[-\frac{1}{2},-\frac{1}{16}]\cup[\frac{1}{16},\frac{1}{2}]$, $\operatorname{supp}\hat{\psi}_2\subseteq[-1,1]$ and

$$\sum_{j\ge 0}\bigl|\hat{\psi}_1\bigl(2^{-2j}\omega\bigr)\bigr|^{2}=1\quad\bigl(|\omega|\ge\tfrac{1}{8}\bigr),\qquad \sum_{l=-2^{j}}^{2^{j}-1}\bigl|\hat{\psi}_2\bigl(2^{j}\omega-l\bigr)\bigr|^{2}=1\quad\bigl(|\omega|\le 1\bigr).$$

Here, $C^{\infty}(\mathbb{R}^n)$ stands for the infinitely differentiable functions on the Euclidean space $\mathbb{R}^n$. Then two shearlet functions $\psi^{(0)},\psi^{(1)}$ are defined by

$$\hat{\psi}^{(0)}(\xi):=\hat{\psi}_1(\xi_1)\,\hat{\psi}_2\Bigl(\frac{\xi_2}{\xi_1}\Bigr)\quad\text{and}\quad\hat{\psi}^{(1)}(\xi):=\hat{\psi}_1(\xi_2)\,\hat{\psi}_2\Bigl(\frac{\xi_1}{\xi_2}\Bigr),$$

respectively.

To introduce discrete shearlets, we need two shear matrices

$$B_0=\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix},\qquad B_1=\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}$$

and two dilation matrices

$$A_0=\begin{pmatrix}4 & 0\\ 0 & 2\end{pmatrix},\qquad A_1=\begin{pmatrix}2 & 0\\ 0 & 4\end{pmatrix}.$$

Define the discrete shearlets $\psi^{(d)}_{j,l,k}(x):=2^{\frac{3j}{2}}\psi^{(d)}\bigl(B_d^{l}A_d^{j}x-k\bigr)$ for $j\ge 0$, $-2^{j}\le l\le 2^{j}-1$, $k\in\mathbb{Z}^2$ and $d=0,1$. Then there exists $\hat{\varphi}\in C_0^{\infty}(\mathbb{R}^2)$ such that

$$\bigl\{\varphi_{j_0,k}(x),\ \psi^{(d)}_{j,l,k}(x):\ j\ge j_0\ge 0,\ -2^{j}\le l\le 2^{j}-1,\ k\in\mathbb{Z}^2,\ d=0,1\bigr\}$$

forms a Parseval frame of $L^2(\mathbb{R}^2)$, where $\varphi_{j_0,k}(x):=2^{j_0}\varphi(2^{j_0}x-k)$. More precisely, for $f\in L^2(\mathbb{R}^2)$,

$$f(x)=\sum_{k\in\mathbb{Z}^2}\langle f,\varphi_{j_0,k}\rangle\varphi_{j_0,k}(x)+\sum_{d=0}^{1}\sum_{j\ge j_0}\sum_{l=-2^{j}}^{2^{j}-1}\sum_{k\in\mathbb{Z}^2}\bigl\langle f,\psi^{(d)}_{j,l,k}\bigr\rangle\psi^{(d)}_{j,l,k}(x)$$

holds in $L^2(\mathbb{R}^2)$. It should be pointed out that the $\psi^{(d)}_{j,l,k}(x)$ are modified for $l=-2^{j}$ and $2^{j}-1$, as in [8].
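The anisotropic structure of this system is easy to check numerically. The following sketch (our own illustration, not code from [8]) verifies that $\det(A_d^{j})=2^{3j}$, which explains the $2^{3j/2}$ normalization, and counts the $2^{j+1}$ shear parameters available at each scale and cone.

```python
import numpy as np
from numpy.linalg import matrix_power, det

# Our own illustration of the dilation/shear matrices behind
# psi_{j,l,k}^{(d)}(x) = 2^{3j/2} psi^{(d)}(B_d^l A_d^j x - k).
A = {0: np.array([[4, 0], [0, 2]]), 1: np.array([[2, 0], [0, 4]])}
B = {0: np.array([[1, 1], [0, 1]]), 1: np.array([[1, 0], [1, 1]])}

for j in range(5):
    for d in (0, 1):
        # det(A_d^j) = 2^{3j}, matching the L^2 normalization 2^{3j/2}
        assert np.isclose(det(matrix_power(A[d], j)), 2 ** (3 * j))
        # shear matrices are volume preserving
        assert np.isclose(det(matrix_power(B[d], 3)), 1.0)
    # the shear parameter runs over -2^j <= l <= 2^j - 1: 2^{j+1} directions
    print(f"scale j={j}: {2 ** (j + 1)} shear parameters per cone")
```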

2 Examples and lemmas

In this section, we provide three important examples of a linear operator $K$ satisfying $(K^{*}Kf)^{\wedge}(\xi)=(b+|\xi|^{2})^{-\alpha}\hat{f}(\xi)$ and present some lemmas which will be used in the next section. To introduce the first one, define a subspace of $L^2(\mathbb{R}^2)$,

$$D(\mathbb{R}^2):=\bigl\{f\in L^1(\mathbb{R}^2):\ f\text{ is bounded}\bigr\}\cap\Bigl\{f\in L^2(\mathbb{R}^2):\ \int_{\mathbb{R}^2}|\xi|^{-1}\bigl|\hat{f}(\xi)\bigr|^{2}\,d\xi<+\infty\Bigr\}$$

and a Hilbert space

$$L^2\bigl([0,\pi)\times\mathbb{R}\bigr):=\Bigl\{f(\theta,t):\ \int_0^{\pi}\!\!\int_{\mathbb{R}}|f(\theta,t)|^{2}\,dt\,d\theta<+\infty\Bigr\}$$

with the inner product $\langle f,g\rangle:=\int_0^{\pi}\!\!\int_{\mathbb{R}}f(\theta,t)\overline{g(\theta,t)}\,dt\,d\theta$.

Example 2.1 Let $L_{\theta,t}:=\{(x,y):\ x\cos\theta+y\sin\theta=t\}\subset\mathbb{R}^2$ and let $ds(x,y)$ be the Euclidean measure on the line $L_{\theta,t}$. Then the classical Radon transform $R: D(\mathbb{R}^2)\to L^2([0,\pi)\times\mathbb{R})$ defined by

$$Rf(\theta,t)=\int_{L_{\theta,t}}f(x,y)\,ds(x,y)$$

satisfies $(R^{*}Rf)^{\wedge}(\xi)=|\xi|^{-1}\hat{f}(\xi)$.

Proof By the definition of $D(\mathbb{R}^2)$, $\int_{\mathbb{R}^2}|\xi|^{-1}\bigl|\hat{f}(\xi)\overline{\hat{g}(\xi)}\bigr|\,d\xi<+\infty$ for $f,g\in D(\mathbb{R}^2)$. It is easy to see that $\int_0^{2\pi}d\theta\int_0^{+\infty}\hat{f}(\omega\cos\theta,\omega\sin\theta)\overline{\hat{g}(\omega\cos\theta,\omega\sin\theta)}\,d\omega=\int_0^{\pi}d\theta\int_{\mathbb{R}}\hat{f}(\omega\cos\theta,\omega\sin\theta)\overline{\hat{g}(\omega\cos\theta,\omega\sin\theta)}\,d\omega$. This with the Fourier slice theorem ([1, 9]) and the Plancherel formula leads to

$$\begin{aligned}\int_{\mathbb{R}^2}|\xi|^{-1}\hat{f}(\xi)\overline{\hat{g}(\xi)}\,d\xi&=\int_0^{2\pi}d\theta\int_0^{+\infty}\hat{f}(\omega\cos\theta,\omega\sin\theta)\overline{\hat{g}(\omega\cos\theta,\omega\sin\theta)}\,d\omega\\&=\int_0^{\pi}d\theta\int_{\mathbb{R}}(R_{\theta}f)^{\wedge}(\omega)\overline{(R_{\theta}g)^{\wedge}(\omega)}\,d\omega=\int_0^{\pi}d\theta\int_{\mathbb{R}}Rf(\theta,t)\overline{Rg(\theta,t)}\,dt=\langle Rf,Rg\rangle,\end{aligned}$$

where $R_{\theta}f(t):=Rf(\theta,t)$. Moreover, $\langle(R^{*}Rf)^{\wedge},\hat{g}\rangle=\langle R^{*}Rf,g\rangle=\langle Rf,Rg\rangle=\langle|\xi|^{-1}\hat{f}(\xi),\hat{g}(\xi)\rangle$ for each $g\in D(\mathbb{R}^2)$.

Because $D(\mathbb{R}^2)$ is dense in $L^2(\mathbb{R}^2)$, one receives the desired conclusion $(R^{*}Rf)^{\wedge}(\xi)=|\xi|^{-1}\hat{f}(\xi)$. Here, $R^{*}Rf\in L^2(\mathbb{R}^2)$ for $f\in D(\mathbb{R}^2)$. In fact, $R^{*}Rf=4\pi I_1 f$ by [10], where $I_1$ is the Riesz fractional integration transform defined by

$$I_1(f)(x):=C_1\int_{\mathbb{R}^2}\frac{f(x-y)}{|y|}\,dy$$

with some normalizing constant $C_1$ [11]. We rewrite $I_1(f)(x)=\int_{|y|\le r}\frac{f(x-y)}{|y|}\,dy+\int_{|y|>r}\frac{f(x-y)}{|y|}\,dy=:J_1+J_2$. Let $h(y)=\frac{1}{|y|}1_{B(0,1)}(y)$ and $h_r(y)=\frac{1}{r^2}h(\frac{y}{r})$, where $B(0,1)$ stands for the unit ball of $\mathbb{R}^2$ and $1_A$ represents the indicator function of the set $A$. Then $J_1=\int_{|y|\le r}\frac{f(x-y)}{|y|}\,dy=r\int_{|y|\le r}h_r(y)f(x-y)\,dy\le C\,rMf(x)$ by Theorem 9 of [12], p.59, where $Mf$ is the Hardy-Littlewood maximal function of $f$.

On the other hand, the Hölder inequality implies

$$J_2\le\|f\|_{\frac{p}{p-1}}\Bigl(\int_{|y|>r}\frac{1}{|y|^{p}}\,dy\Bigr)^{\frac1p}\le C\,r^{\frac{2-p}{p}}\|f\|_{\frac{p}{p-1}}$$

with $p>3$. Taking $r=[Mf(x)]^{-\frac{1}{p-1}}$, one gets $I_1(f)(x)\le C[Mf(x)]^{\frac{p-2}{p-1}}\bigl(1+\|f\|_{\frac{p}{p-1}}\bigr)$ and $\|I_1(f)\|_2\le C\bigl(1+\|f\|_{\frac{p}{p-1}}\bigr)\|Mf\|_{\frac{2(p-2)}{p-1}}^{\frac{p-2}{p-1}}\le C\bigl(1+\|f\|_{\frac{p}{p-1}}\bigr)\|f\|_{\frac{2(p-2)}{p-1}}^{\frac{p-2}{p-1}}<+\infty$, since $f\in L^{\frac{p}{p-1}}(\mathbb{R}^2)\cap L^{\frac{2(p-2)}{p-1}}(\mathbb{R}^2)$ due to the assumption $f\in D(\mathbb{R}^2)$ and $\frac{2(p-2)}{p-1}>1$, $\frac{p}{p-1}>1$. □

In order to introduce the next example, we use $f*g$ to denote the convolution of $f$ and $g$.

Example 2.2 The Bessel operator $B_{\alpha}: L^2(\mathbb{R}^2)\to L^2(\mathbb{R}^2)$ defined by $B_{\alpha}f=b_{\alpha}*f$ with $\hat{b}_{\alpha}(\xi)=(1+|\xi|^{2})^{-\frac{\alpha}{2}}$ and $\alpha>0$ satisfies

$$\bigl(B_{\alpha}^{*}B_{\alpha}f\bigr)^{\wedge}(\xi)=\bigl(1+|\xi|^{2}\bigr)^{-\alpha}\hat{f}(\xi).$$

Proof It is known that $b_{\alpha}(x)\in L^1(\mathbb{R}^2)$ for $\alpha>0$ [11]. Hence, $(B_{\alpha}f)^{\wedge}(\xi)=\hat{b}_{\alpha}(\xi)\hat{f}(\xi)=(1+|\xi|^{2})^{-\frac{\alpha}{2}}\hat{f}(\xi)$. For $f,g\in L^2(\mathbb{R}^2)$, $\langle(B_{\alpha}^{*}B_{\alpha}f)^{\wedge},\hat{g}\rangle=\langle B_{\alpha}^{*}B_{\alpha}f,g\rangle=\langle B_{\alpha}f,B_{\alpha}g\rangle=\langle(B_{\alpha}f)^{\wedge},(B_{\alpha}g)^{\wedge}\rangle=\langle(1+|\xi|^{2})^{-\alpha}\hat{f}(\xi),\hat{g}(\xi)\rangle$. Thus,

$$\bigl(B_{\alpha}^{*}B_{\alpha}f\bigr)^{\wedge}(\xi)=\bigl(1+|\xi|^{2}\bigr)^{-\alpha}\hat{f}(\xi).$$

 □

To introduce the Riesz fractional integration transform, we define

$$D:=\bigl\{f\in L^2(\mathbb{R}^2):\ f\text{ has compact support}\bigr\}\subseteq L^1(\mathbb{R}^2)\cap L^2(\mathbb{R}^2).$$

Then $D\subseteq L^{s}(\mathbb{R}^2)$ ($1\le s\le 2$). For $f\in D$ and $0<\alpha<1$, the Riesz fractional integration transform is defined by

$$I_{\alpha}(f)(x):=C_{\alpha}\int_{\mathbb{R}^2}\frac{f(y)}{|x-y|^{2-\alpha}}\,dy\in L^2(\mathbb{R}^2),$$
(2.1)

where $C_{\alpha}$ is a normalizing constant [11]. In order to show $(I_{\alpha}^{*}I_{\alpha}f)^{\wedge}(\xi)=|\xi|^{-2\alpha}\hat{f}(\xi)$ for $f\in D$ and $0<\alpha<1/2$, we need a lemma ([11], Lemma 2.15).

Lemma 2.1 Let $S(\mathbb{R}^2)$ be the Schwartz space and $\Psi:=\{\psi\in S(\mathbb{R}^2):\ \frac{\partial^{\beta}}{\partial x^{\beta}}\psi(0)=0,\ \beta\in\mathbb{Z}_{+}\times\mathbb{Z}_{+}\}$ with $\mathbb{Z}_{+}$ being the set of non-negative integers. Define $\Phi:=\{\varphi=\hat{\psi}:\ \psi\in\Psi\}$. Then, with $\alpha>0$,

$$\bigl(I_{\alpha}f\bigr)^{\wedge}(\xi)=|\xi|^{-\alpha}\hat{f}(\xi)$$

holds for each $f\in\Phi$.

Example 2.3 The transform $I_{\alpha}$ defined by (2.1) satisfies $(I_{\alpha}^{*}I_{\alpha}f)^{\wedge}(\xi)=|\xi|^{-2\alpha}\hat{f}(\xi)$ for $f\in D$ and $0<\alpha<\frac12$.

Proof As proved in Examples 2.1 and 2.2, it is sufficient to show that, for $f\in D$,

$$\bigl(I_{\alpha}f\bigr)^{\wedge}(\xi)=|\xi|^{-\alpha}\hat{f}(\xi).$$
(2.2)

One proves (2.2) first for $f\in C_0^{\infty}(\mathbb{R}^2)$. Take $\mu(r)\in C^{\infty}([0,\infty))$ with $0\le\mu(r)\le 1$ and

$$\mu(r)=\begin{cases}1, & r\ge 2;\\ 0, & 0\le r\le 1.\end{cases}$$

Define $\psi_N(\xi):=\mu(N|\xi|)\hat{f}(\xi)$. Then $\psi_N(\xi)\in\Psi$ and $f_N(x):=\check{\psi}_N(x)=\hat{\psi}_N(-x)\in\Phi$. By Lemma 2.1,

$$\bigl(I_{\alpha}f_N\bigr)^{\wedge}(\xi)=|\xi|^{-\alpha}\hat{f}_N(\xi).$$
(2.3)

Let $k(x)$ be the inverse Fourier transform of the function $1-\mu(|x|)$ and $k_N(x):=\frac{1}{N^2}k(\frac{x}{N})$. Then $\int_{\mathbb{R}^2}k(x)\,dx=1$ and $f_N(x)=f(x)-k_N*f(x)$. Moreover, the classical approximation theorem [11] tells that

$$\lim_{N\to\infty}\|f_N-f\|_{p}=0$$

for $p>1$. On the other hand, $\|I_{\alpha}f_N-I_{\alpha}f\|_2=\|I_{\alpha}(f_N-f)\|_2\le C\|f_N-f\|_{\frac{2}{1+\alpha}}$ due to Theorem 16 of [12], p.69. Hence, $\lim_{N\to\infty}\|(I_{\alpha}f_N)^{\wedge}(\xi)-(I_{\alpha}f)^{\wedge}(\xi)\|_2=\lim_{N\to\infty}\|I_{\alpha}f_N-I_{\alpha}f\|_2=0$. That is,

$$\lim_{N\to\infty}\bigl(I_{\alpha}f_N\bigr)^{\wedge}(\xi)=\bigl(I_{\alpha}f\bigr)^{\wedge}(\xi)$$
(2.4)

in the $L^2(\mathbb{R}^2)$ sense. Note that $\bigl\|\,|\xi|^{-\alpha}\hat{f}_N(\xi)-|\xi|^{-\alpha}\hat{f}(\xi)\bigr\|_2^{2}=\int_{\mathbb{R}^2}|\xi|^{-2\alpha}|\hat{f}(\xi)|^{2}[1-\mu(N|\xi|)]^{2}\,d\xi$, that $|\xi|^{-2\alpha}|\hat{f}(\xi)|^{2}\in L^1(\mathbb{R}^2)$ for $0<\alpha<\frac12$, and that $\lim_{N\to\infty}[1-\mu(N|\xi|)]=0$. Then

$$\lim_{N\to\infty}\bigl\|\,|\xi|^{-\alpha}\hat{f}_N(\xi)-|\xi|^{-\alpha}\hat{f}(\xi)\bigr\|_2=0$$
(2.5)

thanks to the Lebesgue dominated convergence theorem, which means $\lim_{N\to\infty}|\xi|^{-\alpha}\hat{f}_N(\xi)=|\xi|^{-\alpha}\hat{f}(\xi)$ in the $L^2(\mathbb{R}^2)$ sense. This with (2.3) and (2.4) shows (2.2) for $f\in C_0^{\infty}(\mathbb{R}^2)$.

In order to show (2.2) for $f\in D$, one can find $g\in C_0^{\infty}(\mathbb{R}^2)$ such that $\int_{\mathbb{R}^2}g(x)\,dx=1$ and $\lim_{N\to\infty}\|f*g_N-f\|_{p}=0$ ($p\ge 1$) by Theorem 4.2.1 in [13], where $g_N(\cdot)=N^{2}g(N\cdot)$. Since $f*g_N\in C_0^{\infty}(\mathbb{R}^2)$, the fact proved above says

$$\bigl(I_{\alpha}(f*g_N)\bigr)^{\wedge}(\xi)=|\xi|^{-\alpha}(f*g_N)^{\wedge}(\xi).$$
(2.6)

The same arguments as for (2.4) and (2.5) show that $\lim_{N\to\infty}(I_{\alpha}(f*g_N))^{\wedge}(\xi)=(I_{\alpha}f)^{\wedge}(\xi)$ and $\lim_{N\to\infty}|\xi|^{-\alpha}(f*g_N)^{\wedge}(\xi)=|\xi|^{-\alpha}\hat{f}(\xi)$. Hence,

$$\bigl(I_{\alpha}f\bigr)^{\wedge}(\xi)=|\xi|^{-\alpha}\hat{f}(\xi).$$

This completes the proof of (2.2) for $f\in D$. □
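Numerically, $I_{\alpha}$ can be approximated by the multiplier $|\xi|^{-\alpha}$ on a periodic grid; the zero frequency has to be excluded, which loosely mirrors the role of the space $\Phi$ in Lemma 2.1. The following sketch is our own discretization (grid size, $\alpha$ and the test function are arbitrary choices) and is only meant as an illustration.

```python
import numpy as np

def riesz_potential(f, alpha=0.4, length=1.0):
    """Apply the multiplier |xi|^(-alpha); the xi = 0 mode is set to zero."""
    n = f.shape[0]
    freqs = np.fft.fftfreq(n, d=length / n)
    xi1, xi2 = np.meshgrid(freqs, freqs, indexing="ij")
    r = np.sqrt(xi1**2 + xi2**2)
    multiplier = np.zeros_like(r)
    multiplier[r > 0] = r[r > 0] ** (-alpha)
    return np.real(np.fft.ifft2(multiplier * np.fft.fft2(f)))

x = np.linspace(0, 1, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-80 * ((X - 0.25) ** 2 + (Y - 0.25) ** 2))
f -= f.mean()                      # remove the zero-frequency component of f
print(riesz_potential(f).std())
```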

Next, we prove a lemma which will be used in the next section. For convenience, here and in what follows, we define $\mathcal{M}:=N\cup M$ with $N=\mathbb{Z}^2$ and $M:=\{\mu=(j,l,k,d):\ j\ge j_0,\ -2^{j}\le l\le 2^{j}-1,\ k\in\mathbb{Z}^2,\ d=0,1\}$. Then the shearlet system (introduced in Section 1) can be represented as $\{s_{\mu}:\mu\in\mathcal{M}\}$, where $s_{\mu}=\psi_{\mu}=\psi^{(d)}_{j,l,k}$ if $\mu\in M$, and $s_{\mu}=\varphi_{\mu}=\varphi_{j_0,k}$ if $\mu\in N$.

Lemma 2.2 Let $K$ satisfy $(K^{*}Kf)^{\wedge}(\xi)=(b+|\xi|^{2})^{-\alpha}\hat{f}(\xi)$ and let $\{s_{\mu},\mu\in\mathcal{M}\}$ be the shearlets introduced in the first section. Define $\hat{\sigma}_{\mu}(\xi):=(b+|\xi|^{2})^{\alpha}\hat{s}_{\mu}(\xi)$ and $U_{\mu}:=2^{-2\alpha j}K\sigma_{\mu}$. Then $\|U_{\mu}\|\le C$ and, for $\mu\in M$,

$$\langle f,s_{\mu}\rangle=2^{2\alpha j}\langle Kf,U_{\mu}\rangle.$$

Proof By the Plancherel formula and the assumption $\hat{\sigma}_{\mu}(\xi)=(b+|\xi|^{2})^{\alpha}\hat{s}_{\mu}(\xi)$, one knows that $\langle f,s_{\mu}\rangle=\langle\hat{f},\hat{s}_{\mu}\rangle=\langle\hat{f}(\xi),(b+|\xi|^{2})^{-\alpha}\hat{\sigma}_{\mu}(\xi)\rangle$. Moreover,

$$\langle f,s_{\mu}\rangle=\bigl\langle\hat{f}(\xi),\bigl(K^{*}K\sigma_{\mu}\bigr)^{\wedge}(\xi)\bigr\rangle=\bigl\langle f,K^{*}K\sigma_{\mu}\bigr\rangle=\langle Kf,K\sigma_{\mu}\rangle=2^{2\alpha j}\langle Kf,U_{\mu}\rangle$$

due to $(K^{*}Kf)^{\wedge}(\xi)=(b+|\xi|^{2})^{-\alpha}\hat{f}(\xi)$ and $U_{\mu}:=2^{-2\alpha j}K\sigma_{\mu}$.

Next, one shows $\|U_{\mu}\|\le C$. Note that $\|K\sigma_{\mu}\|^{2}=\langle K\sigma_{\mu},K\sigma_{\mu}\rangle=\langle K^{*}K\sigma_{\mu},\sigma_{\mu}\rangle=\langle(K^{*}K\sigma_{\mu})^{\wedge},\hat{\sigma}_{\mu}\rangle$, $(K^{*}K\sigma_{\mu})^{\wedge}=(b+|\xi|^{2})^{-\alpha}\hat{\sigma}_{\mu}(\xi)$ and $\hat{\sigma}_{\mu}(\xi)=(b+|\xi|^{2})^{\alpha}\hat{s}_{\mu}(\xi)$. Then $\|K\sigma_{\mu}\|^{2}=\langle\hat{s}_{\mu}(\xi),(b+|\xi|^{2})^{\alpha}\hat{s}_{\mu}(\xi)\rangle$ and

$$\|U_{\mu}\|^{2}=2^{-4\alpha j}\|K\sigma_{\mu}\|^{2}=2^{-4\alpha j}\int_{\mathbb{R}^2}\bigl(b+|\xi|^{2}\bigr)^{\alpha}\bigl|\hat{s}_{\mu}(\xi)\bigr|^{2}\,d\xi.$$

Because $\operatorname{supp}\hat{s}_{\mu}\subseteq C_j:=[-2^{2j-1},2^{2j-1}]^{2}\setminus[-2^{2j-4},2^{2j-4}]^{2}$, one receives $\|U_{\mu}\|^{2}=2^{-4\alpha j}\int_{C_j}(b+|\xi|^{2})^{\alpha}|\hat{s}_{\mu}(\xi)|^{2}\,d\xi\le C$. This completes the proof of Lemma 2.2. □
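The uniform bound $\|U_{\mu}\|\le C$ rests on the fact that $(b+|\xi|^{2})^{\alpha}$ is comparable to $2^{4\alpha j}$ on the frequency set $C_j$. A quick numerical check of this comparability, with our own arbitrary values of $b$ and $\alpha$, follows.

```python
import numpy as np

# On C_j = [-2^{2j-1}, 2^{2j-1}]^2 \ [-2^{2j-4}, 2^{2j-4}]^2 the radius |xi|
# ranges between 2^{2j-4} and sqrt(2)*2^{2j-1}; we compare the weight
# (b + |xi|^2)^alpha there with 2^{4*alpha*j}.
b, alpha = 1.0, 0.5
for j in range(1, 8):
    lo, hi = 2.0 ** (2 * j - 4), np.sqrt(2) * 2.0 ** (2 * j - 1)
    ratio_min = (b + lo ** 2) ** alpha / 2.0 ** (4 * alpha * j)
    ratio_max = (b + hi ** 2) ** alpha / 2.0 ** (4 * alpha * j)
    print(f"j={j}: weight / 2^(4*alpha*j) lies in [{ratio_min:.3f}, {ratio_max:.3f}]")
```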

At the end of this section, we introduce two theorems which are important for our discussions. As in [8], we use $\mathrm{STAR}^{2}(A)$ to denote all sets $B\subseteq[0,1]^2$ with $C^2$ boundary $\partial B$ given by

$$\beta(\theta)=\begin{pmatrix}\rho(\theta)\cos\theta\\ \rho(\theta)\sin\theta\end{pmatrix}$$

in polar coordinates. Here, $\rho(\theta)\le\rho_0<1$ and $|\rho''(\theta)|\le A$. Define $\mathcal{E}^{2}(A):=\{f=f_0+f_1\chi_B:\ B\in\mathrm{STAR}^{2}(A)\}$, where $f_0,f_1\in C_0^{2}([0,1]^2)$ are compactly supported on $[0,1]^2$. Let $c_{\mu}:=\langle f,s_{\mu}\rangle$, $M_j:=\{(j,l,k,d):\ |k|\le 2^{2j+1},\ -2^{j}\le l\le 2^{j}-1,\ d=0,1\}$ and

$$R(j,\varepsilon):=\bigl\{\mu\in M_j:\ |c_{\mu}|>\varepsilon\bigr\}.$$

Then, with $\sharp R(j,\varepsilon)$ standing for the cardinality of $R(j,\varepsilon)$, the following conclusion holds [8].

Theorem 2.3 For $f\in\mathcal{E}^{2}(A)$, $\sharp R(j,\varepsilon)\le C\varepsilon^{-\frac23}$ and

$$\sum_{\mu\in M_j}|c_{\mu}|^{2}\le C\,2^{-2j}.$$

Theorem 2.4 [14]

Let $X\sim N(u,1)$ and $t=\sqrt{2\log(\eta^{-1})}$ with $0<\eta\le\frac12$. Then

$$E\bigl|T_s(X,t)-u\bigr|^{2}\le\bigl[2\log\bigl(\eta^{-1}\bigr)+1\bigr]\bigl(\eta+\min\bigl\{u^{2},1\bigr\}\bigr),$$

where $N(u,1)$ denotes the normal distribution with mean $u$ and variance 1, while $T_s(y,t):=\operatorname{sgn}(y)(|y|-t)_{+}$ is the soft thresholding function.
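Theorem 2.4 can be checked empirically. The following Monte Carlo sketch (with arbitrary choices of $\eta$, $u$ and the sample size) estimates the soft-thresholding risk and compares it with the stated bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(y, t):
    """T_s(y, t) = sgn(y) * (|y| - t)_+"""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

eta = 0.05
t = np.sqrt(2 * np.log(1 / eta))
for u in (0.0, 0.5, 3.0):
    x = u + rng.standard_normal(200_000)       # X ~ N(u, 1)
    risk = np.mean((soft_threshold(x, t) - u) ** 2)
    bound = (2 * np.log(1 / eta) + 1) * (eta + min(u ** 2, 1.0))
    print(f"u={u}: empirical risk {risk:.3f} <= bound {bound:.3f}")
```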

3 Main theorem

In this section, we give an approximation result which extends [8, Theorem 4.2] from the Radon transform to a family of linear operators. To do that, we introduce a set $\mathcal{N}(\varepsilon)$ of significant shearlet coefficients as follows. Let

$$s_1=\frac{1}{\frac92+6\alpha}\log_2\bigl(\varepsilon^{-1}\bigr),\qquad s_2=\frac{1}{\frac32+2\alpha}\log_2\bigl(\varepsilon^{-1}\bigr),$$

and $j_0=\lceil s_1\rceil$, $j_1=\lceil s_2\rceil$. Define the set of significant indices

$$\mathcal{N}(\varepsilon):=\bigl\{k\in N:\ |k|\le 2^{2j_0+1}\bigr\}\cup\bigl\{\mu=(j,l,k,d)\in M:\ j_0\le j\le j_1,\ |k|\le 2^{2j+1}\bigr\}\subseteq\mathcal{M}.$$

Consider the model $Y=Kf+\varepsilon W$ with $(K^{*}Kf)^{\wedge}(\xi)=(b+|\xi|^{2})^{-\alpha}\hat{f}(\xi)$. Lemma 2.2 tells us that $y_{\mu}:=2^{2\alpha j}\langle Y,U_{\mu}\rangle=\langle f,s_{\mu}\rangle+\varepsilon 2^{2\alpha j}n_{\mu}$, where $n_{\mu}$ is Gaussian noise with zero mean and bounded variance $\sigma_{\mu}^{2}=\|U_{\mu}\|^{2}\le C$ [15]. Let $c_{\mu}=\langle f,s_{\mu}\rangle$ and $\tilde{f}=\sum_{\mu\in\mathcal{N}(\varepsilon)}\tilde{c}_{\mu}s_{\mu}$ with

$$\tilde{c}_{\mu}=\begin{cases}T_s\bigl(y_{\mu},\ \varepsilon\sqrt{2\log(\sharp\mathcal{N}(\varepsilon))}\,2^{2\alpha j}\sigma_{\mu}\bigr), & \mu\in\mathcal{N}(\varepsilon);\\ 0, & \text{otherwise},\end{cases}$$

where $T_s(y,t)$ is the soft thresholding function. Then the following result holds.
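The estimator can be summarized coefficient-wise: each observed coefficient is soft-thresholded at a level that grows like $2^{2\alpha j}$ with the scale, and indices outside $\mathcal{N}(\varepsilon)$ are discarded. The toy sketch below (one synthetic coefficient per scale, $\sigma_{\mu}=1$, made-up values of $\alpha$ and $\varepsilon$) only illustrates this bookkeeping, not an actual shearlet implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

alpha, eps = 0.5, 1e-2
j0 = int(np.ceil(np.log2(1 / eps) / (9 / 2 + 6 * alpha)))   # coarsest kept scale
j1 = int(np.floor(np.log2(1 / eps) / (3 / 2 + 2 * alpha)))  # finest kept scale
scales = np.arange(j0, j1 + 1)

true_c = 2.0 ** (-scales)                        # made-up decaying coefficients
y = true_c + eps * 2.0 ** (2 * alpha * scales) * rng.standard_normal(len(scales))

N_eps = len(scales)                              # stand-in for the cardinality of N(eps)
thresholds = eps * np.sqrt(2 * np.log(N_eps)) * 2.0 ** (2 * alpha * scales)
c_tilde = soft_threshold(y, thresholds)
print("scales kept:", scales)
print("thresholded estimates:", np.round(c_tilde, 4))
```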

Theorem 3.1 Let $f\in\mathcal{E}^{2}(A)$ be the solution to $Y=Kf+\varepsilon W$ with $(K^{*}Kf)^{\wedge}(\xi)=(b+|\xi|^{2})^{-\alpha}\hat{f}(\xi)$ and let $\tilde{f}$ be defined as above. Then

$$\sup_{f\in\mathcal{E}^{2}(A)}E\|\tilde{f}-f\|^{2}\le C\log\bigl(\varepsilon^{-1}\bigr)\varepsilon^{\frac{2}{\frac32+2\alpha}}\quad(\varepsilon\to 0).$$

Here and in what follows, $E$ stands for the expectation operator.
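For orientation, the exponent $2/(\frac32+2\alpha)$ can be evaluated for the three example operators of Section 2; the Radon transform corresponds to $\alpha=\frac12$, while the values chosen below for the Bessel and Riesz cases are arbitrary illustrations of our own.

```python
# Our own arithmetic illustration of the rate exponent 2/(3/2 + 2*alpha).
examples = {"Radon (alpha = 1/2)": 0.5,
            "Bessel B_alpha with alpha = 1": 1.0,
            "Riesz I_alpha with alpha = 1/4": 0.25}
for name, alpha in examples.items():
    print(f"{name}: error decays like eps^{2 / (1.5 + 2 * alpha):.2f} (up to a log factor)")
```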

Proof Since $\{s_{\mu},\mu\in\mathcal{M}\}$ is a Parseval frame, $f=\sum_{\mu\in\mathcal{M}}c_{\mu}s_{\mu}$ and $\tilde{f}=\sum_{\mu\in\mathcal{N}(\varepsilon)}\tilde{c}_{\mu}s_{\mu}$, so $\tilde{f}-f=\sum_{\mu\in\mathcal{M}}(\tilde{c}_{\mu}-c_{\mu})s_{\mu}$. Moreover, $\|\tilde{f}-f\|^{2}\le\sum_{\mu\in\mathcal{M}}|\tilde{c}_{\mu}-c_{\mu}|^{2}$ and

$$E\|\tilde{f}-f\|^{2}\le\sum_{\mu\in\mathcal{N}(\varepsilon)}E|\tilde{c}_{\mu}-c_{\mu}|^{2}+\sum_{\mu\in\mathcal{N}(\varepsilon)^{c}}|c_{\mu}|^{2}.$$
(3.1)

In order to estimate $\sum_{\mu\in\mathcal{N}(\varepsilon)^{c}}|c_{\mu}|^{2}$, one observes $\sum_{\mu\in M_j}|c_{\mu}|^{2}\le C2^{-2j}$ due to Theorem 2.3. Then $\sum_{j>j_1}\sum_{\mu\in M_j}|c_{\mu}|^{2}\le C\sum_{j>j_1}2^{-2j}\le C2^{-2j_1}$. By $2^{j_1}\approx\varepsilon^{-\frac{1}{\frac32+2\alpha}}$,

$$\sum_{j>j_1}\sum_{\mu\in M_j}|c_{\mu}|^{2}\lesssim\varepsilon^{\frac{2}{\frac32+2\alpha}}.$$
(3.2)

(Here and in what follows, $A\lesssim B$ denotes $A\le CB$ for some constant $C>0$.)

Next, one considers $c_{\mu}$ for $j_0\le j\le j_1$ and $|k|>2^{2j+1}$. Note that $|\psi^{(d)}(x)|\le C_m(1+|x|)^{-m}$ ($d=0,1$; $m=1,2,\ldots$). Then $|\psi^{(d)}_{j,l,k}(x)|\le C_m 2^{\frac{3j}{2}}(1+|B_d^{l}A_d^{j}x-k|)^{-m}$. Since $f\in\mathcal{E}^{2}(A)$, $\operatorname{supp}f\subseteq Q_0:=[0,1]^2$ and

$$\bigl|\bigl\langle f,\psi^{(d)}_{j,l,k}\bigr\rangle\bigr|\le C_m 2^{\frac{3j}{2}}\|f\|_{\infty}\int_{Q_0}\bigl(1+\bigl|B_d^{l}A_d^{j}x-k\bigr|\bigr)^{-m}\,dx.$$

On the other hand, $|B_d^{l}A_d^{j}x|\le\|B_d^{l}A_d^{j}\|\,|x|\le\sqrt2\,2^{2j}|x|\le 2\cdot2^{2j}$ for $x\in Q_0$. Hence, $(1+|B_d^{l}A_d^{j}x-k|)^{-m}\le(1+|k|-|B_d^{l}A_d^{j}x|)^{-m}\le(|k|-2\cdot2^{2j})^{-m}$ for $|k|>2^{2j+1}$. Moreover, $\sum_{|k|>2^{2j+1}}|c_{\mu}|^{2}\lesssim 2^{3j}\sum_{|k|>2^{2j+1}}(|k|-2\cdot2^{2j})^{-2m}=2^{3j}\sum_{n=1}^{\infty}\sum_{2^{2j+n}<|k|\le 2^{2j+n+1}}(|k|-2\cdot2^{2j})^{-2m}\le 2^{3j}2^{-4mj}\sum_{n=1}^{\infty}2^{2(2j+n+1)}(2^{n}-2)^{-2m}\lesssim 2^{3j}2^{-2j(2m-2)}$, since $m$ can be chosen big enough. Therefore,

$$\sum_{j=j_0}^{j_1}\ \sum_{l=-2^{j}}^{2^{j}-1}\ \sum_{|k|>2^{2j+1}}|c_{\mu}|^{2}\le C_m\sum_{j=j_0}^{j_1}2^{8j}2^{-4mj}\lesssim 2^{(8-4m)j_0}\lesssim 2^{-6j_0}\lesssim\varepsilon^{\frac{2}{\frac32+2\alpha}}$$
(3.3)

due to the choice of $j_0$. Similar (even simpler) arguments show $\sum_{|k|>2^{2j_0+1}}|\langle f,\varphi_{j_0,k}\rangle|^{2}\lesssim\varepsilon^{\frac{2}{\frac32+2\alpha}}$ with $\varphi_{j_0,k}(x)=2^{j_0}\varphi(2^{j_0}x-k)$. This with (3.2) and (3.3) leads to

$$\sum_{\mu\in\mathcal{N}(\varepsilon)^{c}}|c_{\mu}|^{2}\lesssim\varepsilon^{\frac{2}{\frac32+2\alpha}}.$$
(3.4)

Finally, one estimates $\sum_{\mu\in\mathcal{N}(\varepsilon)}E|\tilde{c}_{\mu}-c_{\mu}|^{2}$. By the definition of $y_{\mu}$, $\varepsilon^{-1}2^{-2\alpha j}\sigma_{\mu}^{-1}y_{\mu}\sim N(\varepsilon^{-1}2^{-2\alpha j}\sigma_{\mu}^{-1}c_{\mu},1)$. Applying Theorem 2.4 with $\eta^{-1}=\sharp\mathcal{N}(\varepsilon)$, one obtains that

$$E\Bigl|T_s\Bigl(\varepsilon^{-1}2^{-2\alpha j}\sigma_{\mu}^{-1}y_{\mu},\sqrt{2\log(\sharp\mathcal{N}(\varepsilon))}\Bigr)-\varepsilon^{-1}2^{-2\alpha j}\sigma_{\mu}^{-1}c_{\mu}\Bigr|^{2}\le\bigl[2\log(\sharp\mathcal{N}(\varepsilon))+1\bigr]\Bigl[\frac{1}{\sharp\mathcal{N}(\varepsilon)}+\min\bigl\{\varepsilon^{-2}2^{-4\alpha j}\sigma_{\mu}^{-2}c_{\mu}^{2},1\bigr\}\Bigr].$$

Hence, $E\bigl|T_s\bigl[y_{\mu},\varepsilon 2^{2\alpha j}\sigma_{\mu}\sqrt{2\log(\sharp\mathcal{N}(\varepsilon))}\bigr]-c_{\mu}\bigr|^{2}\le[2\log(\sharp\mathcal{N}(\varepsilon))+1]\bigl[\frac{\varepsilon^{2}2^{4\alpha j}\sigma_{\mu}^{2}}{\sharp\mathcal{N}(\varepsilon)}+\min\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\sigma_{\mu}^{2}\}\bigr]$. By $\tilde{c}_{\mu}=T_s\bigl[y_{\mu},\varepsilon 2^{2\alpha j}\sigma_{\mu}\sqrt{2\log(\sharp\mathcal{N}(\varepsilon))}\bigr]$ for $\mu\in\mathcal{N}(\varepsilon)$, one knows that

$$\sum_{\mu\in\mathcal{N}(\varepsilon)}E|\tilde{c}_{\mu}-c_{\mu}|^{2}\le\bigl[2\log(\sharp\mathcal{N}(\varepsilon))+1\bigr]\sum_{\mu\in\mathcal{N}(\varepsilon)}\Bigl[\frac{\varepsilon^{2}2^{4\alpha j}\sigma_{\mu}^{2}}{\sharp\mathcal{N}(\varepsilon)}+\min\bigl\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\sigma_{\mu}^{2}\bigr\}\Bigr].$$
(3.5)

Note that $\mathcal{N}(\varepsilon)\cap M_j\subseteq\{(j,l,k,d):\ |k|\le 2^{2j+1},\ |l|\le 2^{j}\}$. Then $\sharp\mathcal{N}(\varepsilon)\le C\sum_{j\le j_1}2^{5j}\lesssim 2^{5j_1}\lesssim\varepsilon^{-\frac{5}{\frac32+2\alpha}}$, and $\log(\sharp\mathcal{N}(\varepsilon))\le\frac{10}{\frac32+2\alpha}\log(\varepsilon^{-1})\lesssim\log(\varepsilon^{-1})$. Since $\{\sigma_{\mu}:\mu\in\mathcal{M}\}$ is uniformly bounded, $[2\log(\sharp\mathcal{N}(\varepsilon))+1]\varepsilon^{2}\sum_{\mu\in\mathcal{N}(\varepsilon)}\frac{2^{4\alpha j}\sigma_{\mu}^{2}}{\sharp\mathcal{N}(\varepsilon)}\lesssim\log(\varepsilon^{-1})\varepsilon^{2}2^{4\alpha j_1}$. This with the choice of $2^{j_1}$ shows that

$$\bigl[2\log(\sharp\mathcal{N}(\varepsilon))+1\bigr]\varepsilon^{2}\sum_{\mu\in\mathcal{N}(\varepsilon)}\frac{2^{4\alpha j}\sigma_{\mu}^{2}}{\sharp\mathcal{N}(\varepsilon)}\lesssim\log\bigl(\varepsilon^{-1}\bigr)\varepsilon^{\frac{3}{\frac32+2\alpha}}\lesssim\log\bigl(\varepsilon^{-1}\bigr)\varepsilon^{\frac{2}{\frac32+2\alpha}}.$$
(3.6)

It remains to estimate $[2\log(\sharp\mathcal{N}(\varepsilon))+1]\sum_{\mu\in\mathcal{N}(\varepsilon)}\min\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\sigma_{\mu}^{2}\}$. Clearly,

$$\sum_{\mu\in\mathcal{N}(\varepsilon)}\min\bigl\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\bigr\}=\sum_{\{\mu\in\mathcal{N}(\varepsilon):\,|c_{\mu}|\ge 2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^{2}+\sum_{\{\mu\in\mathcal{N}(\varepsilon):\,|c_{\mu}|<2^{2\alpha j}\varepsilon\}}|c_{\mu}|^{2}.$$
(3.7)

By Theorem 2.3, $\sharp\{\mu\in M_j:\ |c_{\mu}|\ge 2^{2\alpha j}\varepsilon\}\,2^{4\alpha j}\varepsilon^{2}\lesssim\bigl(2^{2\alpha j}\varepsilon\bigr)^{-\frac23}2^{4\alpha j}\varepsilon^{2}=2^{\frac{8}{3}\alpha j}\varepsilon^{\frac43}$. Hence,

$$\sum_{\{\mu\in\mathcal{N}(\varepsilon):\,|c_{\mu}|\ge 2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^{2}=\sum_{j=j_0}^{j_1}\sharp\bigl\{\mu\in M_j:\ |c_{\mu}|\ge 2^{2\alpha j}\varepsilon\bigr\}\,2^{4\alpha j}\varepsilon^{2}\lesssim 2^{\frac{8}{3}\alpha j_1}\varepsilon^{\frac43}.$$
(3.8)

On the other hand, $\sum_{\{\mu\in\mathcal{N}(\varepsilon):\,|c_{\mu}|<2^{2\alpha j}\varepsilon\}}|c_{\mu}|^{2}=\sum_{j=j_0}^{j_1}\sum_{n=0}^{\infty}\sum_{\{2^{2\alpha j-n-1}\varepsilon<|c_{\mu}|\le 2^{2\alpha j-n}\varepsilon\}}|c_{\mu}|^{2}$. According to Theorem 2.3, $\sharp R(j,2^{2\alpha j-n-1}\varepsilon)\lesssim 2^{-\frac23(2\alpha j-n-1)}\varepsilon^{-\frac23}$ and

$$\sum_{\{2^{2\alpha j-n-1}\varepsilon<|c_{\mu}|\le 2^{2\alpha j-n}\varepsilon\}}|c_{\mu}|^{2}\lesssim 2^{-\frac23(2\alpha j-n-1)}\varepsilon^{-\frac23}\,2^{2(2\alpha j-n)}\varepsilon^{2}=2^{\frac{8}{3}\alpha j}2^{-\frac43 n}2^{\frac23}\varepsilon^{\frac43}.$$

Therefore,

$$\sum_{\{\mu\in\mathcal{N}(\varepsilon):\,|c_{\mu}|<2^{2\alpha j}\varepsilon\}}|c_{\mu}|^{2}\lesssim\sum_{j=j_0}^{j_1}\sum_{n=0}^{\infty}2^{\frac{8}{3}\alpha j}2^{-\frac43 n}\varepsilon^{\frac43}\lesssim 2^{\frac{8}{3}\alpha j_1}\varepsilon^{\frac43}.$$

Combining this with (3.7) and (3.8), one knows that $\sum_{\mu\in\mathcal{N}(\varepsilon)}\min\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\}\lesssim 2^{\frac{8}{3}\alpha j_1}\varepsilon^{\frac43}$. Furthermore,

$$\sum_{\mu\in\mathcal{N}(\varepsilon)}\min\bigl\{c_{\mu}^{2},\varepsilon^{2}2^{4\alpha j}\bigr\}\lesssim\varepsilon^{\frac{2}{2\alpha+\frac32}}$$
(3.9)

thanks to $2^{j_1}\approx\varepsilon^{-\frac{1}{2\alpha+\frac32}}$. Now it follows from (3.5), (3.6) and (3.9) that

$$\sum_{\mu\in\mathcal{N}(\varepsilon)}E|\tilde{c}_{\mu}-c_{\mu}|^{2}\lesssim\log\bigl(\varepsilon^{-1}\bigr)\varepsilon^{\frac{2}{2\alpha+\frac32}}.$$

This with (3.1) and (3.4) leads to the desired conclusion $\sup_{f\in\mathcal{E}^{2}(A)}E\|\tilde{f}-f\|^{2}\le C\log(\varepsilon^{-1})\,\varepsilon^{\frac{2}{\frac32+2\alpha}}$. The proof is completed. □

References

  1. Natterer F: The Mathematics of Computerized Tomography. Wiley, New York; 1986.

  2. Natterer F, Wübbeling F: Mathematical Methods in Image Reconstruction. SIAM Monographs on Mathematical Modeling and Computation. SIAM, Philadelphia; 2001.

  3. Candès EJ, Donoho DL: Continuous curvelet transform: II. Discretization and frames. Appl. Comput. Harmon. Anal. 2005, 19: 198–222. doi:10.1016/j.acha.2005.02.004

  4. Candès EJ, Donoho DL: New tight frames of curvelets and optimal representations of objects with $C^2$ singularities. Commun. Pure Appl. Math. 2004, 57: 219–266. doi:10.1002/cpa.10116

  5. Candès EJ, Donoho DL: Recovering edges in ill-posed inverse problems: optimality of curvelet frames. Ann. Stat. 2002, 30: 784–842.

  6. Easley G, Labate D, Lim WQ: Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 2008, 25: 25–46. doi:10.1016/j.acha.2007.09.003

  7. Guo K, Labate D: Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 2007, 39: 298–318. doi:10.1137/060649781

  8. Colonna F, Easley G, Guo K, Labate D: Radon transform inversion using the shearlet representation. Appl. Comput. Harmon. Anal. 2010, 29: 232–250. doi:10.1016/j.acha.2009.10.005

  9. Lee NY, Lucier BJ: Wavelet methods for inverting the Radon transform with noisy data. IEEE Trans. Image Process. 2001, 10: 1079–1094.

  10. Strichartz RS: Radon inversion-variations on a theme. Am. Math. Mon. 1982, 89: 377–384. doi:10.2307/2321649

  11. Samko SG: Hypersingular Integrals and Their Applications. Taylor & Francis, London; 2002.

  12. Zhou MQ: Harmonic Analysis. Beijing Normal University Press, Beijing; 1999.

  13. Lu SZ, Wang KY: Real Analysis. Beijing Normal University Press, Beijing; 2005.

  14. Donoho DL, Johnstone IM: Ideal spatial adaptation via wavelet shrinkage. Biometrika 1994, 81: 425–455. doi:10.1093/biomet/81.3.425

  15. Johnstone IM: Gaussian Estimation: Sequence and Wavelet Models. http://www-stat.stanford.edu/imj/GE12–27–11 (1999)


Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 11271038) and the Natural Science Foundation of Beijing (No. 1082003).

Author information


Correspondence to Youming Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

LH and YL finished this work together. Two authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Hu, L., Liu, Y. Shearlet approximations to the inverse of a family of linear operators. J Inequal Appl 2013, 11 (2013). https://doi.org/10.1186/1029-242X-2013-11
