Open Access

Analysis of a new variational model for image multiplicative denoising

Journal of Inequalities and Applications 2013, 2013:568

https://doi.org/10.1186/1029-242X-2013-568

Received: 14 August 2013

Accepted: 5 November 2013

Published: 27 November 2013

Abstract

In this paper, we study the mathematical properties of a new variational model for image multiplicative noise removal. Some important properties of the model, including the lower semicontinuity, the differential property, and the convergence and regularization property, are established for the first time. The existence and uniqueness of a solution for the problem, as well as a comparison principle, are also established.

MSC: Primary 49J40; 65K10; secondary 94A08.

Keywords

regularization; variational approach; BV function space; multiplicative noise

1 Introduction

Consider the following variational problem:
$$\min_u E(u) := \min_u \left\{ \int_\Omega \phi(Du) + \lambda \int_\Omega \left( \log u + \frac{f}{u} \right) \right\}, \tag{1}$$

where $\Omega \subset \mathbb{R}^N$ is a bounded open set with Lipschitz-regular boundary $\partial\Omega$, $\lambda$ is a constant, $f : \Omega \to \mathbb{R}^+$ is a given function, and $\phi$ is an even function from $\mathbb{R}^N$ to $\mathbb{R}$ with the linear growth

$$\phi(s) = \begin{cases} |s| \log(1+|s|), & |s| < M, \\[1mm] b|s| - \dfrac{M^2}{1+M}, & |s| \ge M, \end{cases} \tag{2}$$

where $b = M/(1+M) + \log(1+M)$ and $M$ is a positive constant whose value is determined by the size of the image.
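To make the growth function concrete, here is a small numerical sketch (our own illustration, not part of the paper) that implements $\phi$ and $b$ from (2) and checks the $C^1$ matching of the two branches at $|s| = M$ together with the two-sided bound $b|s| - M^2/(1+M) \le \phi(s) \le b|s|$ used later in (7):

```python
import numpy as np

def b_const(M):
    # b = M/(1+M) + log(1+M), chosen so that phi is C^1 at |s| = M
    return M / (1 + M) + np.log(1 + M)

def phi(s, M):
    # phi from (2), acting on magnitudes; s may be a scalar or an array
    s = np.abs(np.asarray(s, dtype=float))
    b = b_const(M)
    return np.where(s < M, s * np.log(1 + s), b * s - M**2 / (1 + M))

M = 4.0
b = b_const(M)

# Continuity and C^1 matching at |s| = M: both branch values and both
# one-sided slopes agree there.
left = M * np.log(1 + M)
right = b * M - M**2 / (1 + M)
slope_left = np.log(1 + M) + M / (1 + M)   # derivative of s*log(1+s) at M
assert abs(left - right) < 1e-12
assert abs(slope_left - b) < 1e-12

# Two-sided linear-growth bound behind (7).
s = np.linspace(0.0, 10 * M, 2001)
assert np.all(phi(s, M) <= b * s + 1e-12)
assert np.all(b * s - M**2 / (1 + M) <= phi(s, M) + 1e-12)
print("phi checks passed")
```

The value $M = 4$ here is an arbitrary choice for the check; any positive $M$ gives the same conclusions.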

Total variation was first introduced into computer vision by Rudin, Osher and Fatemi [1] as a regularizing criterion for solving inverse problems, and it has proved very efficient for regularizing images without smoothing the boundaries of objects. A variety of variational methods have been proposed for image processing over the last decades; the main variational approaches devoted to multiplicative noise include the RLO model [2] proposed by Rudin et al., the AA model [3] by Aubert and Aujol, and the JY model [4] by Jin and Yang.

Variational problem (1) is a multiplicative noise removal model. The model was first obtained via the MAP estimator [5] and is specifically devoted to the denoising of images corrupted by gamma noise. It not only yields good denoising performance, but also significantly avoids or reduces the 'edge blur' and 'staircase' effects.
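For context, gamma multiplicative noise is usually modelled as $f = u\eta$, where $\eta$ follows a gamma distribution with unit mean (this standard formulation, and the number of looks $L$ below, are our assumptions; the paper does not restate them at this point). A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10               # number of looks; the noise variance is 1/L
u = 2.0              # a constant "clean" intensity
n = 200_000

# Gamma multiplicative noise with mean 1: shape L, scale 1/L.
eta = rng.gamma(shape=L, scale=1.0 / L, size=n)
f = u * eta          # observed (noisy) data

print(eta.mean(), eta.var(), f.mean())
assert abs(eta.mean() - 1.0) < 0.01     # noise has unit mean
assert abs(eta.var() - 1.0 / L) < 0.01  # variance 1/L
assert abs(f.mean() - u) < 0.02         # noise preserves the mean intensity
```

Because the noise is mean-one and multiplicative, the observed image preserves the mean intensity of the clean image, which is what the $\log u + f/u$ fidelity term in (1) is designed around.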

From a mathematical point of view, most of the existing variational models for denoising have been studied extensively [2–4, 6]. For example, for a very informative discussion of the use of total variation regularization in image processing, see the introduction of [7]. In [3], Aubert and Aujol proved the existence of a minimizer for a variational model and studied the associated evolution problem, for which they derived existence and uniqueness results for the solution; moreover, they proved the convergence of an implicit iterative scheme for computing the solution. Jin and Yang [4] established the existence and uniqueness of a weak solution for the evolution equations associated with the JY model, and showed that the solution of the evolution equation converges weakly in $\mathrm{BV}$ and strongly in $L^2$ to the minimizer as $t \to \infty$. For model (1), the initial boundary value problem of the associated partial differential equation has been derived and discretized numerically. The experiments in [5] showed that the quality of the images restored by the model is excellent. However, in order to further study and apply model (1), rigorous work is still needed to investigate its mathematical properties.

The main goal of this paper is thus to further study the mathematical properties of model (1). The study is conducted in the space of functions of bounded variation (BV). We first establish some important mathematical properties of the function $\phi$ in the model, including the lower semicontinuity, the differential property, and the convergence and regularization of its mollification $\phi_\epsilon$. These make it easy to prove the existence and uniqueness of a solution for the model under appropriate assumptions. Furthermore, a comparison principle is obtained.

The paper is organized as follows. Some preliminaries are given in the next section. In Section 3, some properties of the function $\phi$ are established. New results for variational model (1), including existence, uniqueness and comparison results, are derived in Section 4. Finally, a conclusion is given in Section 5.

2 Preliminaries

In this paper, we use the following classical distributional spaces. For the convenience of the reader, we recall some basic notation and facts here; for details we refer the reader to the work of Aubert and Kornprobst [6].

Let $|E|$ denote the Lebesgue measure of a measurable set $E \subset \mathbb{R}^N$, let $\mathcal{H}^{N-1}$ denote the Hausdorff measure of dimension $N-1$, let $W^{1,p}(\Omega)$ and $L^p(\Omega)$ be the standard notations for the Sobolev and Lebesgue spaces, respectively, and let $C_0(\Omega)$ be the set of functions in $C(\Omega)$ with compact support in $\Omega$.

Definition 2.1 If for all functions $\varphi = (\varphi_1, \varphi_2) \in C_0^1(\Omega)^2$ with $\|\varphi\|_{L^\infty(\Omega)} \le 1$, the formula
$$\int_\Omega u \operatorname{div} \varphi \, dx = -\int_\Omega Du \cdot \varphi \, dx$$
holds, then $Du = (D_1 u, D_2 u)$ is called the distributional gradient of $u$, $u$ is called a function of bounded variation, and the total variation of $Du$ on $\Omega$ is defined as follows:
$$\int_\Omega |Du| := \sup \left\{ \int_\Omega u \operatorname{div} \varphi \, dx : \varphi = (\varphi_1, \varphi_2) \in C_0^1(\Omega)^2, \ \|\varphi\|_{L^\infty(\Omega)} \le 1 \right\}.$$
Definition 2.2 The space of functions of bounded variation, $\mathrm{BV}(\Omega)$, is defined as follows:
$$\mathrm{BV}(\Omega) = \left\{ u \in L^1(\Omega) : \int_\Omega |Du| < \infty \right\}.$$

It should be noted here that $\mathrm{BV}(\Omega)$ endowed with the norm $\|u\|_{\mathrm{BV}(\Omega)} = \int_\Omega |u| \, dx + \int_\Omega |Du|$ is a Banach space.

With regard to the compactness in BV ( Ω ) , we state the following theorem.

Theorem 2.1 ([8])

Assume that $\{u_n\}_{n=1}^\infty$ is a bounded sequence in $\mathrm{BV}(\Omega)$. Then there exist a subsequence $\{u_{n_j}\}_{j=1}^\infty$ and a function $u \in \mathrm{BV}(\Omega)$ such that $u_{n_j} \to u$ in $L^1(\Omega)$ as $j \to \infty$.

Definition 2.3 The approximate upper limit $u^+(x)$ and the approximate lower limit $u^-(x)$, respectively, are defined by
$$u^+(x) = \inf \left\{ t \in [-\infty, +\infty] : \lim_{\rho \to 0^+} \frac{|\{u > t\} \cap B(x,\rho)|}{\rho^2} = 0 \right\},$$
$$u^-(x) = \sup \left\{ t \in [-\infty, +\infty] : \lim_{\rho \to 0^+} \frac{|\{u < t\} \cap B(x,\rho)|}{\rho^2} = 0 \right\},$$
where $B(x,\rho)$ is the ball of center $x$ and radius $\rho$. When $u^+(x)$ is different from $u^-(x)$, we define the jump set $S_u$ as follows:
$$S_u = \{ x \in \Omega : u^-(x) < u^+(x) \}.$$
After choosing a normal $n_u(x)$ ($x \in S_u$) pointing toward the largest value of $u$, we recall the following decomposition (see [9] for more details):
$$Du = \nabla u \, dx + D^s u, \tag{3}$$
where $\nabla u$ is the density of the absolutely continuous part of $Du$ with respect to the Lebesgue measure, $D^s u = C_u + (u^+ - u^-)\, n_u\, \mathcal{H}^{N-1}|_{S_u}$ is the singular part, $C_u$ is the Cantor part, $(u^+ - u^-)\, n_u\, \mathcal{H}^{N-1}|_{S_u}$ is the jump part, and $\mathcal{H}^{N-1}|_{S_u}$ is the Hausdorff measure restricted to the set $S_u$.

Let $\eta_\epsilon$ be the usual mollifier with compact support in $B(0,\epsilon)$ and $\int_{\mathbb{R}^N} \eta_\epsilon(y)\,dy = 1$. If $g \in L^1_{\mathrm{loc}}(\Omega)$, define its mollification $g_\epsilon := g * \eta_\epsilon$ in $\Omega_\epsilon := \{x \in \Omega : \operatorname{dist}(x, \partial\Omega) > \epsilon\}$. That is,
$$g_\epsilon(x) = \int_\Omega g(y)\,\eta_\epsilon(x-y)\,dy = \int_{B(0,\epsilon)} g(x-y)\,\eta_\epsilon(y)\,dy \quad \text{for } x \in \Omega_\epsilon.$$

Using the standard properties of mollifiers, we have the following properties.

Theorem 2.2

(1) If $1 \le p < \infty$ and $g \in L^p_{\mathrm{loc}}(\Omega)$, then $g_\epsilon \in C^\infty(\Omega_\epsilon)$ and $g_\epsilon \to g$ in $L^p_{\mathrm{loc}}(\Omega)$; if $g \in L^p(\Omega)$, then $g_\epsilon \to g$ in $L^p(\Omega)$.

(2) If $A \le g(x) \le B$ for all $x$, then $A \le g_\epsilon(x) \le B$ for all $x \in \Omega_\epsilon$.

(3) If $h, g \in L^1(\Omega)$, then $\int_\Omega h_\epsilon g \, dx = \int_\Omega h g_\epsilon \, dx$.

(4) If $g \in C^1(\Omega)$, then $\dfrac{\partial g_\epsilon}{\partial x_i} = \left( \dfrac{\partial g}{\partial x_i} \right)_\epsilon$.

Proof Using the standard properties of mollifiers, we can easily prove the above results. □
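Property (2) of Theorem 2.2 is easy to check numerically: a discretized mollifier is a nonnegative kernel with unit mass, so mollification takes convex combinations of nearby values and cannot leave the range $[A, B]$. A small 1D sketch (our own construction, with an arbitrary bounded test signal):

```python
import numpy as np

def bump_kernel(eps, h):
    # Discretized standard mollifier supported on (-eps, eps),
    # normalized so the weights sum to 1.
    x = np.arange(-eps + h, eps, h)
    k = np.exp(-1.0 / (1.0 - (x / eps) ** 2))
    return k / k.sum()

h = 0.01
x = np.arange(0, 10, h)
# A bounded signal with jump discontinuities.
g = np.clip(np.sin(3 * x) + 0.5 * np.sign(np.cos(x)), -1.2, 1.2)

k = bump_kernel(eps=0.3, h=h)
g_eps = np.convolve(g, k, mode="valid")  # mollified signal on the inner domain

# Property (2): A <= g <= B implies A <= g_eps <= B on the inner domain.
assert g_eps.min() >= g.min() - 1e-12
assert g_eps.max() <= g.max() + 1e-12
print("bounds preserved:", round(g_eps.min(), 3), round(g_eps.max(), 3))
```

The `mode="valid"` convolution restricts the output to the discrete analogue of $\Omega_\epsilon$, where the kernel fits entirely inside the domain, exactly as in the definition above.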

3 Some properties of ϕ

By the definition of $\phi$ in (2), it is easy to see that $\phi$ belongs to $C^1(\mathbb{R}^N)$ and $\nabla\phi$ is Lipschitz continuous, with
$$\nabla\phi(s) = \begin{cases} \left[ \log(1+|s|) + \dfrac{|s|}{1+|s|} \right] \dfrac{s}{|s|}, & |s| \le M, \\[2mm] b\,\dfrac{s}{|s|}, & |s| > M. \end{cases}$$
Furthermore, it can be proved that
$$|\phi(s_1) - \phi(s_2)| \le b\,|s_1 - s_2|, \tag{4}$$
$$|\nabla\phi(s_1) - \nabla\phi(s_2)| \le b\,|s_1 - s_2|. \tag{5}$$
Theorem 3.1 If $\phi_\epsilon = \phi * \eta_\epsilon$, where $\eta_\epsilon$ is the mollifier, then

(1) $\phi_\epsilon \in C^\infty(\mathbb{R}^N)$ and $\phi_\epsilon$ is convex;

(2) $\phi_\epsilon \to \phi$ uniformly on compact subsets of $\mathbb{R}^N$ as $\epsilon \to 0$;

(3) $\nabla\phi_\epsilon(x) \cdot x \ge 0$ for all $x \in \mathbb{R}^N$.

Proof (1) Since $\phi$ is continuous and convex, $\phi_\epsilon$ is clearly smooth and convex in $\mathbb{R}^N$ (cf. Theorem 6 of Appendix C in [8]).

(2) According to (4), for all $x \in \mathbb{R}^N$ we have
$$\lim_{r \to 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |\phi(y) - \phi(x)| \, dy = 0. \tag{6}$$
For a fixed point $x$, by the definition of $\phi_\epsilon$, we have from (6) that
$$|\phi_\epsilon(x) - \phi(x)| = \left| \int_{B(x,\epsilon)} \eta_\epsilon(x-y)\,\big[\phi(y) - \phi(x)\big]\,dy \right| \le \frac{1}{\epsilon^N} \int_{B(x,\epsilon)} \eta\!\left(\frac{x-y}{\epsilon}\right) |\phi(y) - \phi(x)|\,dy \le \frac{C}{|B(x,\epsilon)|} \int_{B(x,\epsilon)} |\phi(y) - \phi(x)|\,dy \to 0 \quad \text{as } \epsilon \to 0.$$

Now let $\phi \in C(\mathbb{R}^N)$ and $V \subset\subset \mathbb{R}^N$. Choosing $V \subset\subset W \subset\subset \mathbb{R}^N$ and noting that $\phi$ is uniformly continuous on $W$, the limit (6) holds uniformly for $x \in V$. Consequently, the calculation above implies $\phi_\epsilon \to \phi$ uniformly on $V$.

(3) Let $\epsilon < \frac{1}{4}$ (for ease of presentation we treat the case $M = 1$, so that $b = \frac12 + \log 2$). The convexity of $\phi_\epsilon$ implies that $\phi_\epsilon(0) \ge \phi_\epsilon(x) + \nabla\phi_\epsilon(x)\cdot(0-x)$, so $\nabla\phi_\epsilon(x)\cdot x \ge \phi_\epsilon(x) - \phi_\epsilon(0)$. Further, we have
$$\phi_\epsilon(x) - \phi_\epsilon(0) = \int_{B(0,\epsilon)} \phi(x-y)\,\eta_\epsilon(y)\,dy - \int_{B(0,\epsilon)} \phi(y)\,\eta_\epsilon(y)\,dy = \int_{B(0,\epsilon)\cap B(x,1)} \big[\,|x-y|\log(1+|x-y|) - |y|\log(1+|y|)\,\big]\,\eta_\epsilon(y)\,dy + \int_{B(0,\epsilon)\cap\{|x-y|\ge 1\}} \Big[\,b|x-y| - \tfrac12 - |y|\log(1+|y|)\,\Big]\,\eta_\epsilon(y)\,dy.$$

For the first term on the right-hand side, we consider two possible cases.

Case 1. $|x| \ge 2\epsilon$. Since $|y| \le \epsilon$, it follows that $|x-y| \ge |x| - |y| \ge \epsilon \ge |y|$. Noting that the function $\phi$ is increasing on $(0,+\infty)$, we obtain $|x-y|\log(1+|x-y|) - |y|\log(1+|y|) \ge 0$, so the first term on the right-hand side is nonnegative.

Case 2. $|x| < 2\epsilon$. Since $|y| \le \epsilon < \frac14$, we have $|x-y| \le |x| + |y| \le 2\epsilon + \epsilon \le \frac34 < 1$, and so $B(0,\epsilon) \cap B(x,1) = B(0,\epsilon)$. In terms of the convexity of $\phi$, we have
$$|x-y|\log(1+|x-y|) - |y|\log(1+|y|) \ge \nabla\phi(y)\cdot\big((y-x)-y\big) = \left(\log(1+|y|) + \frac{|y|}{1+|y|}\right)\frac{y}{|y|}\cdot(-x).$$
Noting that $\int_{B(0,\epsilon)} y\,\eta_\epsilon(y)\,dy = 0$ and that the right-hand side above is odd in $y$, we get
$$\int_{B(0,\epsilon)\cap B(x,1)} \big[\,|x-y|\log(1+|x-y|) - |y|\log(1+|y|)\,\big]\,\eta_\epsilon(y)\,dy \ge \int_{B(0,\epsilon)} \left(\log(1+|y|) + \frac{|y|}{1+|y|}\right)\frac{y}{|y|}\cdot(-x)\,\eta_\epsilon(y)\,dy = 0.$$
For the second term on the right-hand side, noting that $\epsilon < 1$, we have
$$\int_{B(0,\epsilon)\cap\{|x-y|\ge 1\}} \Big[\,b|x-y| - \tfrac12 - |y|\log(1+|y|)\,\Big]\,\eta_\epsilon(y)\,dy \ge \int_{B(0,\epsilon)\cap\{|x-y|\ge 1\}} \Big(b - \tfrac12 - \log 2\Big)\,\eta_\epsilon(y)\,dy = 0.$$

Therefore $\phi_\epsilon(x) - \phi_\epsilon(0) \ge 0$, and hence $\nabla\phi_\epsilon(x)\cdot x \ge 0$. □

Remark Since $\phi : \mathbb{R} \to \mathbb{R}^+$ is convex and increasing on $\mathbb{R}^+$ with linear growth at infinity, and its recession function satisfies $\phi^\infty(1) = \lim_{s\to\infty} \phi(s)/s = b$, for $u \in \mathrm{BV}(\Omega)$ we can get the Lebesgue decomposition of the measure $\int_\Omega \phi(Du)$:
$$\int_\Omega \phi(Du) = \int_\Omega \phi(\nabla u)\,dx + b \int_\Omega |D^s u| = \int_\Omega \phi(\nabla u)\,dx + b \int_{\Omega\setminus S_u} |C_u| + b \int_{S_u} |u^+ - u^-| \, d\mathcal{H}^{N-1}.$$
Since $b|s| - M^2/(1+M) \le \phi(s) \le b|s|$ for all $s \in \mathbb{R}^N$, we have
$$b \int_\Omega |Du| - \frac{M^2}{1+M}\,|\Omega| \le \int_\Omega \phi(Du) \le b \int_\Omega |Du|, \quad \forall u \in \mathrm{BV}(\Omega). \tag{7}$$

The lower semicontinuity, with respect to convergence in $L^1(\Omega)$, of the functional $u \mapsto \int_\Omega \phi(Du)$, which is one of the most important properties of BV functions, is given in the following theorem.

Theorem 3.2 Assume that $\{u_i\}_{i=1}^\infty \subset \mathrm{BV}(\Omega)$ and $u_i \to u$ in $L^1(\Omega)$ as $i \to \infty$. Then
$$\int_\Omega \phi(Du) \le \liminf_{i\to\infty} \int_\Omega \phi(Du_i). \tag{8}$$
Proof Let $v \in C_0^1(\Omega)$ be such that $|v| \le 1$. Then
$$\int_\Omega u \operatorname{div} v \, dx = \lim_{i\to\infty} \int_\Omega u_i \operatorname{div} v \, dx \le \liminf_{i\to\infty} \int_\Omega |Du_i|.$$
Taking the supremum over all such $v$, we have
$$\int_\Omega |Du| \le \liminf_{i\to\infty} \int_\Omega |Du_i|. \tag{9}$$

Thus (8) follows on combining (7) and (9). □

Theorem 3.3 (Regularization)

Suppose $u \in \mathrm{BV}(\Omega)$. If $u_\epsilon$ are the mollified functions described in Section 2 (where $u$ is extended by $0$ outside $\Omega$ if necessary), then
$$\lim_{\epsilon\to 0} \int_\Omega \phi(Du_\epsilon) = \int_\Omega \phi(Du).$$
Proof Since $u_\epsilon \to u$ in $L^1(\Omega)$ as $\epsilon \to 0$ by Theorem 2.2, we have, by Theorem 3.2, the inequality
$$\int_\Omega \phi(Du) \le \liminf_{\epsilon\to 0} \int_\Omega \phi(Du_\epsilon),$$

and so it remains only to prove the reverse inequality.

For $u \in \mathrm{BV}(\Omega)$, by the method of proof of Theorem A.2 in [10], we can obtain
$$\int_\Omega \phi(Du) = \sup_{\substack{v \in C_0^1(\Omega) \\ |v| \le 1}} \int_\Omega \big( u \operatorname{div} v - |v|\log(1+|v|) \big)\,dx. \tag{10}$$
Suppose $v \in C_0^1(\Omega)$ and $|v| \le 1$. Then, by Theorem 2.2,
$$\int_\Omega u_\epsilon \operatorname{div} v \, dx = \int_\Omega u \,(\operatorname{div} v)_\epsilon \, dx = \int_\Omega u \operatorname{div} v_\epsilon \, dx$$
and
$$\lim_{\epsilon\to 0} \int_\Omega |v_\epsilon|\log(1+|v_\epsilon|) = \int_\Omega |v|\log(1+|v|).$$
Thus
$$\int_\Omega \big( u_\epsilon \operatorname{div} v - |v|\log(1+|v|) \big)\,dx = \lim_{\epsilon\to 0} \int_\Omega \big( u \operatorname{div} v_\epsilon - |v_\epsilon|\log(1+|v_\epsilon|) \big)\,dx. \tag{11}$$
Since $|v| \le 1$, by Theorem 2.2 we get $|v_\epsilon| \le 1$. On taking the supremum over all such $v$ and noting (10) and (11), we get
$$\int_\Omega \phi(Du_\epsilon) = \sup_{\substack{v \in C_0^1(\Omega) \\ |v| \le 1}} \lim_{\epsilon\to 0} \int_\Omega \big( u \operatorname{div} v_\epsilon - |v_\epsilon|\log(1+|v_\epsilon|) \big)\,dx \le \sup_{|v_\epsilon| \le 1} \int_\Omega \big( u \operatorname{div} v_\epsilon - |v_\epsilon|\log(1+|v_\epsilon|) \big)\,dx = \int_\Omega \phi(Du).$$
Thus
$$\limsup_{\epsilon\to 0} \int_\Omega \phi(Du_\epsilon) \le \int_\Omega \phi(Du).$$

 □

4 Some facts for the variational problem

Consider the following variational problem: given $f \in L^1(\Omega)$ with $0 < \inf_\Omega f \le f \le \sup_\Omega f < \infty$, find a function $\tilde u \in S(\Omega)$ such that
$$E(\tilde u) = \min_{u \in S(\Omega)} \left\{ \int_\Omega \phi(Du) + \int_\Omega \left( \log u + \frac{f}{u} \right) \right\}, \tag{12}$$

where $S(\Omega) = \{ u \in \mathrm{BV}(\Omega) : \|u\|_{L^1(\Omega)} \le 2 \}$.

For the existence of solutions of minimization problem (12), an existence result in the BV space has been obtained in [5]: for $f \in L^\infty(\Omega)$ with $\inf_\Omega f > 0$, problem (12) has at least one solution $u \in \mathrm{BV}(\Omega)$ satisfying $0 < \inf_\Omega f \le u \le \sup_\Omega f$.
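Numerically, a minimizer of (12) can be approached by descent on a discretized energy. The sketch below is entirely our own illustration (it is not the scheme of [5]): it evaluates a discrete analogue of $E$ using forward differences, computes a brute-force finite-difference gradient on a tiny $8\times 8$ image, and backtracks on the step size so the energy never increases; the clipping constant $0.05$ that keeps $u$ positive is an arbitrary choice.

```python
import numpy as np

def b_const(M):
    return M / (1 + M) + np.log(1 + M)

def phi(s, M):
    s = np.abs(s)
    return np.where(s < M, s * np.log(1 + s), b_const(M) * s - M**2 / (1 + M))

def energy(u, f, lam=1.0, M=4.0):
    # Discrete analogue of (12): phi of forward-difference gradient
    # magnitudes plus the log u + f/u fidelity term.
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return phi(np.hypot(gx, gy), M).sum() + lam * (np.log(u) + f / u).sum()

def descend(u, f, steps=5, h=1e-6):
    # Finite-difference gradient descent with backtracking line search,
    # so the energy cannot increase.
    for _ in range(steps):
        e0 = energy(u, f)
        g = np.zeros_like(u)
        for idx in np.ndindex(u.shape):      # brute-force numerical gradient
            up = u.copy()
            up[idx] += h
            g[idx] = (energy(up, f) - e0) / h
        t = 0.1
        while t > 1e-8:
            u_new = np.clip(u - t * g, 0.05, None)  # keep u > 0
            if energy(u_new, f) < e0:
                u = u_new
                break
            t *= 0.5
    return u

rng = np.random.default_rng(1)
f = np.clip(1.0 + 0.3 * rng.standard_normal((8, 8)), 0.4, 1.6)
u0 = f.copy()                    # initialize at the noisy data
u = descend(u0.copy(), f)
assert energy(u, f) < energy(u0, f)
print("energy decreased:", energy(u0, f), "->", energy(u, f))
```

Starting from $u_0 = f$, the fidelity gradient vanishes and the regularizer drives the first step, so the energy strictly decreases; this is only a toy verification that the discrete energy is well defined and decreasing along descent, not an efficient solver.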

We find that, under appropriate assumptions, we can prove the existence of a Lipschitz solution for problem (12) as detailed below.

Theorem 4.1 (Existence of a Lipschitz solution)

Suppose that $\Omega$ is a bounded open domain in $\mathbb{R}^N$ and $f \in L^1(\Omega)$ with $0 < \inf_\Omega f \le f \le \sup_\Omega f < \infty$. Then problem (12) has at least one Lipschitz solution if for any $x \in \Omega$ there exist two planes in $\mathbb{R}^{N+1}$, $z = \pi^+(x)$ and $z = \pi^-(x)$, such that

(i) $\pi^-(x) \le f(x) \le \pi^+(x)$, $x \in \Omega$;

(ii) the slopes of these planes are uniformly bounded, independent of $x$, by a constant $K$; that is, $|D\pi^\pm| \le K$ for all $x \in \Omega$.

Proof Consider the approximation of problem (12) obtained by replacing $\phi(s)$ by $\phi_\epsilon(s) + |s|\log(1+|s|)$. By the general elliptic theory (see [8]), there exists a minimizer $u_\epsilon$ for each $\epsilon > 0$. The argument in Section 10 of [11] shows that $u_\epsilon$ has a uniform gradient bound not greater than $K$; then, passing to a subsequence of $u_\epsilon$ as $\epsilon \to 0$, we obtain a Lipschitz function $u$, and the rest of the proof is straightforward. □

Theorem 4.2 (Uniqueness)

Let $f > 0$ be in $L^\infty(\Omega)$. Then problem (12) has at most one solution $\tilde u$ such that $0 < \tilde u < 2f$.

Proof Letting $\tau(u) = \log u + f/u$, we have $\tau'(u) = \dfrac{1}{u} - \dfrac{f}{u^2} = \dfrac{u-f}{u^2}$ and $\tau''(u) = \dfrac{2f-u}{u^3}$. We thus deduce that if $0 < u < 2f$, then $\tau$ is strictly convex. Using (7) and noting that $\phi : \mathbb{R} \to \mathbb{R}^+$ is convex and increasing on $\mathbb{R}^+$ with linear growth at infinity, we obtain the uniqueness of the minimizer. □
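The convexity computation in this proof is easy to verify numerically. The sketch below (our own check) confirms midpoint convexity of $\tau$ on $(0, 2f)$ and that $\tau'$ vanishes at $u = f$, the unique minimizer of the fidelity term:

```python
import numpy as np

f = 1.5                                    # a positive data value
tau = lambda u: np.log(u) + f / u          # the fidelity integrand

u = np.linspace(0.05, 2 * f - 0.05, 500)   # grid strictly inside (0, 2f)

# tau''(u) = (2f - u)/u^3 > 0 on (0, 2f): check midpoint convexity,
# tau((a+b)/2) <= (tau(a) + tau(b))/2 for neighboring grid points.
mid = tau((u[:-1] + u[1:]) / 2)
chord = (tau(u[:-1]) + tau(u[1:])) / 2
assert np.all(mid <= chord + 1e-12)

# tau'(u) = (u - f)/u^2 vanishes at u = f, so the grid minimum sits there.
assert abs(u[np.argmin(tau(u))] - f) < 0.01
print("tau is convex on (0, 2f) with minimum at u = f")
```

Note that strict convexity fails beyond $u = 2f$, which is exactly why the uniqueness statement is restricted to $0 < \tilde u < 2f$.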

Theorem 4.3 (Comparison principle)

Let $f_1$ and $f_2$ be in $L^\infty(\Omega)$ with $\inf_\Omega f_1 > 0$ and $\inf_\Omega f_2 > 0$. Assume that $f_1 < f_2$ in $\Omega$. Denote by $u_1$ (resp. $u_2$) a solution of (12) for $f = f_1$ (resp. $f = f_2$). Then $u_1 \le u_2$.

Proof Let $J(u) = \int_\Omega \phi(Du)$. From Theorem 4 in [5], we know that $u_1$ and $u_2$ exist. Since $u_i$ is a minimizer with data $f_i$ ($i = 1, 2$), taking $\inf(u_1,u_2)$ and $\sup(u_1,u_2)$ as competitors we have
$$\int_\Omega \left( \log\big(\inf(u_1,u_2)\big) + \frac{f_1}{\inf(u_1,u_2)} \right) + J\big(\inf(u_1,u_2)\big) \ge \int_\Omega \left( \log u_1 + \frac{f_1}{u_1} \right) + J(u_1) \tag{13}$$
and
$$\int_\Omega \left( \log\big(\sup(u_1,u_2)\big) + \frac{f_2}{\sup(u_1,u_2)} \right) + J\big(\sup(u_1,u_2)\big) \ge \int_\Omega \left( \log u_2 + \frac{f_2}{u_2} \right) + J(u_2). \tag{14}$$
Using the fact that $J(\inf(u_1,u_2)) + J(\sup(u_1,u_2)) \le J(u_1) + J(u_2)$ (see [12] or [13] for details), and adding the two inequalities (13) and (14), we obtain
$$\int_\Omega \left( \log\big(\inf(u_1,u_2)\big) + \log\big(\sup(u_1,u_2)\big) + \frac{f_1}{\inf(u_1,u_2)} + \frac{f_2}{\sup(u_1,u_2)} \right) \ge \int_\Omega \left( \log u_1 + \log u_2 + \frac{f_1}{u_1} + \frac{f_2}{u_2} \right). \tag{15}$$
Writing $\Omega = \{u_1 > u_2\} \cup \{u_1 \le u_2\}$ and noting that the two sides of (15) coincide on $\{u_1 \le u_2\}$, we easily get
$$\int_{\{u_1 > u_2\}} \left( \log u_2 + \log u_1 + \frac{f_1}{u_2} + \frac{f_2}{u_1} \right) \ge \int_{\{u_1 > u_2\}} \left( \log u_1 + \log u_2 + \frac{f_1}{u_1} + \frac{f_2}{u_2} \right),$$
that is,
$$\int_{\{u_1 > u_2\}} (f_1 - f_2)\,\frac{u_1 - u_2}{u_1 u_2} \ge 0.$$

Since $f_1 < f_2$ while $u_1 > u_2$ on this set, the integrand is negative there, and we thus deduce that $\{u_1 > u_2\}$ has zero Lebesgue measure; i.e., $u_1 \le u_2$ a.e. in $\Omega$. □

Remark The above result agrees with observation in image processing. Suppose that $f_1$ and $f_2$ are two noisy images, and that $u_1$ and $u_2$ are obtained from $f_1$ and $f_2$, respectively, after denoising by some method. If the image $f_1$ is dimmer than $f_2$ at almost every pixel, then naturally $u_1$ is no brighter than $u_2$.

5 Conclusion

In this paper, we have studied the mathematical properties of an important variational model proposed in our recent work [5] for removing multiplicative noise. Several important properties of the model, including the lower semicontinuity, the differential property, and the convergence and regularization property, have been established and proved for the first time. The well-posedness of the underlying mathematical problem and a comparison principle for its solutions have also been obtained.

Declarations

Acknowledgements

The first author is supported partly by the National Natural Science Foundation of China (11071266), and partly by the Chinese Scholarship Council during the author’s visit to Curtin University of Technology. The second author is supported by the Australian Research Council.

Authors’ Affiliations

(1)
College of Mathematics and Physics, Chongqing University of Posts and Telecommunications
(2)
Department of Mathematics and Statistics, Curtin University

References

  1. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259–268. doi:10.1016/0167-2789(92)90242-F
  2. Rudin L, Lions PL, Osher S: Multiplicative denoising and deblurring: theory and algorithms. In Geometric Level Sets in Imaging, Vision, and Graphics. Edited by: Osher S, Paragios N. Springer, Berlin; 2003: 103–119.
  3. Aubert G, Aujol JF: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 2008, 68: 925–946. doi:10.1137/060671814
  4. Jin ZM, Yang XP: A variational model to remove the multiplicative noise in ultrasound images. J. Math. Imaging Vis. 2011, 39: 62–74. doi:10.1007/s10851-010-0225-3
  5. Hu XG, Zhang LT, Jiang W: Improved variational model to remove multiplicative noise based on partial differential equation. J. Comput. Appl. 2012, 32: 1879–1881.
  6. Aubert G, Kornprobst P: Mathematical Problems in Image Processing. Appl. Math. Sci. 147. Springer, New York; 2002.
  7. Chan TF, Esedoglu S: Aspects of total variation regularized $L^1$ function approximation. SIAM J. Appl. Math. 2005, 65: 1817–1837. doi:10.1137/040604297
  8. Evans LC: Partial Differential Equations. Am. Math. Soc., Providence; 1998.
  9. Ambrosio L: A compactness theorem for a new class of functions of bounded variation. Boll. Unione Mat. Ital. 1989, 7: 857–881.
  10. Hardt R, Kinderlehrer D: Elastic plastic deformation. Appl. Math. Optim. 1983, 10: 203–246. doi:10.1007/BF01448387
  11. Hartman P, Stampacchia G: On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115: 271–310. doi:10.1007/BF02392210
  12. Chambolle A: An algorithm for mean curvature motion. Interfaces Free Bound. 2004, 6: 195–218.
  13. Giusti E: Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Basel; 1994.

Copyright

© Hu et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.