Analysis of a new variational model for image multiplicative denoising
Journal of Inequalities and Applications volume 2013, Article number: 568 (2013)
Abstract
In this paper, we study the mathematical properties of a new variational model for multiplicative noise removal in images. Several important properties of the model, including the lower semicontinuity, the differential property, and the convergence and regularization properties, are established for the first time. The existence and uniqueness of a solution of the problem, as well as a comparison principle, are also established.
MSC: Primary 49J40; 65K10; secondary 94A08.
1 Introduction
Consider the following variational problem:

min_u E(u) = ∫_Ω φ(|Du|) dx + λ ∫_Ω ( log u + f/u ) dx,   (1)
where Ω ⊂ R^N is a bounded open set with Lipschitz-regular boundary ∂Ω, λ > 0 is a constant, f: Ω → R⁺ is a given function, and φ is an even function from R^N to R with the linear growth

φ(ξ) = |ξ| log(1+|ξ|) if |ξ| ≤ M,   φ(ξ) = b|ξ| − M²/(1+M) if |ξ| > M,   (2)

where b = M/(1+M) + log(1+M) and M is a positive constant whose value is determined by the size of an image.
The total variation was first introduced in computer vision by Rudin, Osher and Fatemi [1] as a regularizing criterion for solving inverse problems. It has proved to be very efficient for regularizing images without smoothing the boundaries of the objects. A variety of variational methods have been proposed for image processing over the last decades; the main variational approaches devoted to multiplicative noise include the RLO model [2] proposed by Rudin et al., the AA model [3] by Aubert and Aujol, and the JY model [4] by Jin and Yang.
Variational problem (1) is a multiplicative noise removal model. It was first obtained via the MAP estimator [5] and is specifically devoted to the denoising of images corrupted by gamma noise. The model not only yields good denoising performance, but also significantly avoids or reduces the ‘edge blur’ and ‘step effect’ artifacts.
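For context, the degradation model behind (1) is f = u·η with η following a gamma law of mean one. The following minimal simulation of this noise model is illustrative only; the image intensity and shape parameter L are our own choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_noisy(u, L=10):
    # eta ~ Gamma(shape=L, scale=1/L) has mean 1 and variance 1/L,
    # so f = u * eta is the standard multiplicative (speckle) model.
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

# A flat 64x64 test image of intensity 100.
u = np.full((64, 64), 100.0)
f = gamma_noisy(u, L=10)

# The noise is mean-preserving; its relative std is about 1/sqrt(L).
print(round(f.mean(), 1), round(f.std() / f.mean(), 3))
```

The larger the shape parameter L, the weaker the noise, which is why multiplicative models are often benchmarked over several values of L.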
From a mathematical point of view, most of the existing variational models for denoising have been studied extensively [2–4, 6]. For example, for a very informative discussion of the use of total variation regularization in the field of image processing, see the introduction of [7]. In [3], Aubert and Aujol proved the existence of a minimizer for a variational model and studied the associated evolution problem, for which they derived existence and uniqueness results for the solution. Moreover, they proved the convergence of an implicit iterative scheme to compute the solution. Jin and Yang [4] established the existence and uniqueness of a weak solution for the associated evolution equations of the JY model, and showed that the solution of the evolution equation converges weakly in BV and strongly in L² to the minimizer as t → ∞. For model (1), the initial boundary value problem of the associated partial differential equation was derived and discretized numerically in [5]. The experiments in [5] showed that the quality of the images restored by the model is excellent. However, in order to further study and apply model (1), rigorous work is needed to investigate its mathematical properties.
The main goal of this paper is thus to further study the mathematical properties of model (1). The study is conducted in the space of functions of bounded variation (BV). We first establish some important mathematical properties of the function φ in the model, including the lower semicontinuity, the differential property, and the convergence and regularization of its mollification φ^ε. With these results, the existence and uniqueness of a solution of the model follow under appropriate assumptions. Furthermore, a comparison principle is obtained.
The paper is organized as follows. Some preliminaries are given in the next section. In Section 3, some properties about the function ϕ are obtained. Some new results for variational model (1) such as the existence, uniqueness and comparison results are derived in Section 4. Finally, a conclusion is given in Section 5.
2 Preliminaries
In this paper, we use the following classical distributional spaces. For the convenience of readers, we recall here some basic notation and facts; for details, we refer the readers to the work of Aubert and Kornprobst [6].
Let |E| denote the Lebesgue measure of a measurable set E in R^N, let H^{N−1} denote the Hausdorff measure of dimension N−1, let W^{1,p}(Ω) and L^p(Ω) be the standard notations for the Sobolev and Lebesgue spaces, respectively, and let C_0^∞(Ω) be the set of functions in C^∞(Ω) with compact support in Ω.
Definition 2.1 If for all functions φ = (φ₁, φ₂) ∈ C_0^1(Ω)² with ‖φ‖_{L^∞(Ω)} ≤ 1, the formula

∫_Ω u div φ dx = −∫_Ω φ · dDu

holds, then Du = (D₁u, D₂u) is called the distributional gradient of u, u is called a function of bounded variation, and the total variation of Du on Ω is defined as follows:

∫_Ω |Du| = sup { ∫_Ω u div φ dx : φ ∈ C_0^1(Ω)², ‖φ‖_{L^∞(Ω)} ≤ 1 }.
Definition 2.2 The space of functions of bounded variation, BV(Ω), is defined as follows:

BV(Ω) = { u ∈ L¹(Ω) : ∫_Ω |Du| < ∞ }.
It should be noted here that BV(Ω) endowed with the norm ‖u‖_{BV(Ω)} = ∫_Ω |u| dx + ∫_Ω |Du| is a Banach space.
With regard to the compactness in \mathit{BV}(\mathrm{\Omega}), we state the following theorem.
Theorem 2.1 ([8])
Assume that {\{{u}_{n}\}}_{n=1}^{\mathrm{\infty}} is a bounded sequence in \mathit{BV}(\mathrm{\Omega}). Then there exists a subsequence {\{{u}_{{n}_{j}}\}}_{j=1}^{\mathrm{\infty}} and a function u\in \mathit{BV}(\mathrm{\Omega}) such that {u}_{{n}_{j}}\to u as j\to \mathrm{\infty} in {L}^{1}(\mathrm{\Omega}).
Definition 2.3 The approximate upper limit u⁺(x) and the approximate lower limit u⁻(x) are defined, respectively, by

u⁺(x) = inf { t ∈ R : lim_{ρ→0} |{u > t} ∩ B(x, ρ)| / ρ^N = 0 },
u⁻(x) = sup { t ∈ R : lim_{ρ→0} |{u < t} ∩ B(x, ρ)| / ρ^N = 0 },

where B(x, ρ) is the ball of center x and radius ρ. When u⁺(x) is different from u⁻(x), we define the jump set S_u as follows:

S_u = { x ∈ Ω : u⁻(x) < u⁺(x) }.
After choosing a normal n_u(x) (x ∈ S_u) pointing toward the larger value of u, we recall the following decomposition (see [9] for more details):

Du = ∇u dx + D^s u,

where ∇u is the density of the absolutely continuous part of Du with respect to the Lebesgue measure, D^s u = C_u + (u⁺ − u⁻) n_u H^{N−1}|_{S_u} is the singular part, C_u is the Cantor part, (u⁺ − u⁻) n_u H^{N−1}|_{S_u} is the jump part, and H^{N−1}|_{S_u} is the Hausdorff measure restricted to the set S_u.
Let η_ε be the usual mollifier with compact support B(0, ε) and ∫_{R^N} η_ε(y) dy = 1. If g ∈ L¹_loc(Ω), define its mollification g^ε := g ∗ η_ε in Ω_ε := { x ∈ Ω : dist(x, ∂Ω) > ε }. That is,

g^ε(x) = ∫_{B(x,ε)} η_ε(x − y) g(y) dy,   x ∈ Ω_ε.
Using the standard properties of mollifiers, we have the following properties.
Theorem 2.2

(1)
If 1\le p<\mathrm{\infty} and g\in {L}_{\mathrm{loc}}^{p}(\mathrm{\Omega}), then {g}^{\u03f5}\in {C}^{\mathrm{\infty}}(\mathrm{\Omega}) and {g}^{\u03f5}\to g in {L}_{\mathrm{loc}}^{p}(\mathrm{\Omega}) and if g\in {L}^{p}(\mathrm{\Omega}), then {g}^{\u03f5}\to g in {L}^{p}(\mathrm{\Omega}).

(2)
If A\le g(x)\le B for all x, then A\le {g}^{\u03f5}(x)\le B for all x\in {\mathrm{\Omega}}_{\u03f5}.

(3)
If h,g\in {L}^{1}(\mathrm{\Omega}), then {\int}_{\mathrm{\Omega}}{h}^{\u03f5}g\phantom{\rule{0.2em}{0ex}}dx={\int}_{\mathrm{\Omega}}h{g}^{\u03f5}\phantom{\rule{0.2em}{0ex}}dx.

(4)
If g\in {C}^{1}(\mathrm{\Omega}), then \frac{\partial {g}^{\u03f5}}{\partial {x}_{i}}={(\frac{\partial g}{\partial {x}_{i}})}^{\u03f5}.
Proof Using the standard properties of mollifiers, we can easily prove the above results. □
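Properties (2) and (3) of Theorem 2.2 can be checked numerically on a grid. The sketch below is our own discrete construction (a sampled bump kernel, with zero padding at the grid boundary standing in for the restriction to Ω_ε) and is not part of the paper:

```python
import numpy as np

def bump_kernel(n, h):
    # Discrete sample of the standard mollifier eta_eps with
    # eps = n*h, normalized to unit mass on the grid.
    t = np.arange(-n, n + 1) * h
    eps = n * h
    inside = np.abs(t) < eps
    k = np.zeros_like(t)
    k[inside] = np.exp(-1.0 / (1.0 - (t[inside] / eps) ** 2))
    return k / k.sum()

h = 0.01
k = bump_kernel(10, h)              # eps = 0.1
x = np.arange(100) * h              # grid on [0, 1)

# Property (2): if A <= g <= B, then A <= g_eps <= B (checked away
# from the boundary, where discrete convolution pads with zeros).
g = 2.0 + np.sin(20 * x)            # 1 <= g <= 3
g_eps = np.convolve(g, k, mode="same")
inner = slice(len(k), -len(k))
print(g_eps[inner].min() >= 1.0, g_eps[inner].max() <= 3.0)

# Property (3): the kernel is even, so the smoothing operator is
# symmetric; w plays the role of h in the statement.
w = np.cos(7 * x)
lhs = np.sum(np.convolve(w, k, mode="same") * g)
rhs = np.sum(w * np.convolve(g, k, mode="same"))
print(abs(lhs - rhs) < 1e-10)
```

The symmetry in property (3) holds exactly (up to rounding) because the sampled kernel is even, mirroring the change of variables used in the continuous proof.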
3 Some properties about ϕ
By the definition of φ in (2), it is easy to see that φ belongs to C¹(R^N) and φ is Lipschitz continuous with the gradient

∇φ(ξ) = ( log(1+|ξ|) + |ξ|/(1+|ξ|) ) ξ/|ξ| for 0 < |ξ| ≤ M,   ∇φ(ξ) = b ξ/|ξ| for |ξ| > M,   ∇φ(0) = 0.   (4)

Furthermore, the following properties can be proved.
Theorem 3.1 If {\varphi}^{\u03f5}=\varphi \ast {\eta}_{\u03f5}, where {\eta}_{\u03f5} is the mollifier, then

(1)
{\varphi}^{\u03f5}\in {C}^{\mathrm{\infty}}({R}^{N}) and {\varphi}^{\u03f5} is convex.

(2)
{\varphi}^{\u03f5}\to \varphi uniformly on compact subsets of {R}^{N} as \u03f5\to 0.

(3)
\mathrm{\nabla}{\varphi}^{\u03f5}(x)\cdot x\ge 0 for all x\in {R}^{N}.
Proof (1) Since ϕ is continuous and convex, {\varphi}^{\u03f5} is clearly smooth and convex in {R}^{N} (cf. Theorem 6 of Appendix C in [8]).

(2)
According to (4), for all x\in {R}^{N}, we have
lim_{r→0} (1/|B(x,r)|) ∫_{B(x,r)} |φ(y) − φ(x)| dy = 0.   (6)
For a fixed point x, by the definition of {\varphi}^{\u03f5}, we have from (6) that
Now φ ∈ C(R^N); given V ⊂⊂ R^N, we choose V ⊂⊂ W ⊂⊂ R^N and note that φ is uniformly continuous on W. Thus limit (6) holds uniformly for x ∈ V. Consequently, the calculation above implies that φ^ε → φ uniformly on V.
(3) Let ε < 1/4. The convexity of φ^ε implies that φ^ε(0) − φ^ε(x) ≥ ∇φ^ε(x) · (0 − x), so ∇φ^ε(x) · x ≥ φ^ε(x) − φ^ε(0). Further, we have
For the first term on the righthand side, we consider two possible cases.
Case 1. |x| ≥ 2ε. Since |y| ≤ ε, it follows that |x − y| ≥ |x| − |y| ≥ ε ≥ |y|. Noting that the function φ(s) is increasing on (0, +∞), we obtain that |x − y| log(1 + |x − y|) − |y| log(1 + |y|) ≥ 0. So the first term on the right-hand side is nonnegative.
Case 2. |x| < 2ε. Since |y| ≤ ε < 1/4, we have |x − y| ≤ |x| + |y| ≤ 2ε + ε ≤ 3/4 < 1, and so B(0, ε) ∩ B(x, 1) = B(0, ε). By the convexity of φ, we have
Noting that {\int}_{B(0,\u03f5)}y{\eta}_{\u03f5}(y)\phantom{\rule{0.2em}{0ex}}dy=0, we get
For the second term on the righthand side, noting that \u03f5<1, we have
Therefore, {\varphi}^{\u03f5}(x){\varphi}^{\u03f5}(0)\ge 0. □
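Property (3) of Theorem 3.1 can be illustrated numerically. The sketch below is our own construction: it takes the one-dimensional integrand φ(s) = |s| log(1+|s|) (the form of (2) on |s| ≤ M; the cap at M never binds on this grid), mollifies it with a sampled bump kernel, and checks the sign of x · (φ^ε)′(x) by central differences:

```python
import numpy as np

h = 0.001
x = np.arange(-3000, 3001) * h          # grid on [-3, 3]
phi = np.abs(x) * np.log1p(np.abs(x))   # phi(s) = |s| log(1+|s|)

# Discrete mollifier with support [-0.1, 0.1], unit discrete mass.
n = 100
t = np.arange(-n, n + 1) * h
eta = np.zeros_like(t)
inside = np.abs(t) < n * h
eta[inside] = np.exp(-1.0 / (1.0 - (t[inside] / (n * h)) ** 2))
eta /= eta.sum()

phi_eps = np.convolve(phi, eta, mode="same")
dphi = np.gradient(phi_eps, h)          # central differences

# Theorem 3.1(3): grad(phi_eps)(x) . x >= 0, checked away from the
# artificial grid boundary where zero padding corrupts the convolution.
inner = slice(2 * (n + 1), -2 * (n + 1))
print((x[inner] * dphi[inner] >= -1e-8).all())
```

The sign condition holds because φ^ε is convex and even, hence minimized at the origin, which is exactly the mechanism used in the proof above.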
Remark Since φ: R → R⁺ is convex, increasing on R⁺ with linear growth at infinity, and φ^∞(1) = lim_{s→∞} φ(s)/s = b, for u ∈ BV(Ω) we can get the Lebesgue decomposition of the measure ∫_Ω φ(|Du|):
Since b|s| − M²/(1+M) ≤ φ(s) ≤ b|s| for all s ∈ R^N, we have
The lower semicontinuity of the functional u ↦ ∫_Ω φ(|Du|) with respect to convergence in L¹(Ω), which is one of the most important properties of BV functions, is given in the following theorem.
Theorem 3.2 Assume that {\{{u}_{i}\}}_{i=1}^{\mathrm{\infty}}\subset \mathit{BV}(\mathrm{\Omega}) and {u}_{i}\to u in {L}^{1}(\mathrm{\Omega}) as i\to \mathrm{\infty}, then
Proof Let v ∈ C_0^1(Ω) be such that |v| ≤ 1. Then
Taking the supremum over all such v, we have
Thus (8) follows on combining (7) and (9). □
Theorem 3.3 (Regularization)
Suppose u\in \mathit{BV}(\mathrm{\Omega}). If {u}^{\u03f5} are the mollified functions described in Section 2 (where u is extended to be 0 outside Ω if necessary), then
Proof Since {u}^{\u03f5}\to u as \u03f5\to 0 in {L}^{1}(\mathrm{\Omega}) from Theorem 2.2, we have, by Theorem 3.2, the inequality
and so it remains only to prove the reverse inequality.
For u\in \mathit{BV}(\mathrm{\Omega}), by the method of the proof in Theorem A.2 of [10], we can obtain
Suppose v ∈ C_0^1(Ω) and |v| ≤ 1; then, by Theorem 2.2,
and
Thus
Since |v| ≤ 1, by Theorem 2.2, we get that |v^ε| ≤ 1. On taking the supremum over all such v and noting (10), we get that
Thus
□
4 Some facts for the variational problem
Consider the variational problem: assume that f ∈ L¹(Ω) with 0 < inf_Ω f ≤ f ≤ sup_Ω f < ∞, and find a function ũ ∈ S(Ω) such that
where S(\mathrm{\Omega})=\{u\in \mathit{BV}(\mathrm{\Omega}):\mathrm{\nabla}u\in {L}^{1}{(\mathrm{\Omega})}^{2}\}.
For the existence of solutions for minimization problem (12), we have obtained the existence result in the BV space (see [5]), namely, for f\in {L}^{\mathrm{\infty}}(\mathrm{\Omega}) with {inf}_{\mathrm{\Omega}}f>0, problem (12) has at least one solution u\in \mathit{BV}(\mathrm{\Omega}) satisfying 0<{inf}_{\mathrm{\Omega}}f\le u\le {sup}_{\mathrm{\Omega}}f.
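To make problem (12) concrete, the following is a minimal one-dimensional explicit gradient-descent sketch, not the paper's scheme. It assumes the integrand φ(s) = |s| log(1+|s|) from (2) (so the flux is φ′(|p|) sign(p) = (log(1+|p|) + |p|/(1+|p|)) sign(p)), uses Neumann boundary conditions, and all grid, step, and parameter choices are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def denoise_1d(f, lam=1.0, dt=0.1, steps=1000):
    # Explicit descent on E(u) = int phi(|u'|) + lam*(log u + f/u),
    # with phi(s) = |s| log(1+|s|) and zero-flux (Neumann) ends.
    # Stability: phi'' <= 2, so dt = 0.1 < h^2 / (2 * 2) with h = 1.
    u = f.astype(float).copy()
    for _ in range(steps):
        p = np.diff(u)                       # forward differences u_x
        flux = np.sign(p) * (np.log1p(np.abs(p))
                             + np.abs(p) / (1 + np.abs(p)))
        div = np.diff(np.r_[0.0, flux, 0.0])  # discrete divergence
        u += dt * (div - lam * (u - f) / u**2)
    return u

# Piecewise-constant signal corrupted by gamma noise of mean one.
clean = np.where(np.arange(200) < 100, 50.0, 100.0)
f = clean * rng.gamma(16.0, 1.0 / 16.0, size=clean.shape)
u = denoise_1d(f)

print(np.abs(f - clean).mean() > np.abs(u - clean).mean())
```

Because φ″(s) decays like 1/(1+s), the scheme diffuses strongly where gradients are small (noise) and weakly across the large jump, which is the edge-preserving behavior the model is designed for.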
We find that, under appropriate assumptions, we can prove the existence of a Lipschitz solution for problem (12) as detailed below.
Theorem 4.1 (Existence of a Lipschitz solution)
Suppose that Ω is a bounded open domain in {R}^{N} and f\in {L}^{1}(\mathrm{\Omega}) with 0<{inf}_{\mathrm{\Omega}}f\le f\le {sup}_{\mathrm{\Omega}}f<\mathrm{\infty}. Then problem (12) has at least one Lipschitz solution if for any x\in \partial \mathrm{\Omega}, there exist two planes in {R}^{N+1}, z={\pi}^{+}(x) and z={\pi}^{}(x) such that

(i)
{\pi}^{}(x)\le f(x)\le {\pi}^{+}(x), \mathrm{\forall}x\in \partial \mathrm{\Omega};

(ii)
the slopes of these planes are uniformly bounded, independently of x, by a constant K; that is, |Dπ^±| ≤ K for all x ∈ Ω.
Proof Consider the approximation of problem (12) obtained by replacing φ(s) with its mollification φ^ε(s). By the general elliptic theory (see [8]), there exists a minimizer u_ε for each ε > 0. The proof in Section 10 of [11] shows that u_ε has a uniform gradient bound not greater than K; then, by passing to a subsequence of u_ε as ε → 0, we obtain a Lipschitz function u, and the rest of the proof is straightforward. □
Theorem 4.2 (Uniqueness)
Let f>0 be in {L}^{\mathrm{\infty}}(\mathrm{\Omega}), then problem (12) has at most one solution \tilde{u} such that 0<\tilde{u}<2f.
Proof Letting τ(u) = log u + f/u, we have τ′(u) = 1/u − f/u² = (u − f)/u² and τ″(u) = (2f − u)/u³. We thus deduce that if 0 < u < 2f, then τ is strictly convex. Using (7) and noting that φ: R → R⁺ is convex, increasing on R⁺ with linear growth at infinity, we obtain the uniqueness of a minimizer. □
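The strict convexity of the fidelity term on (0, 2f) can be checked numerically. The sketch below (with an illustrative constant f = 3, our own choice) compares the analytic τ″ from the proof with a second-order finite-difference approximation:

```python
import numpy as np

f = 3.0
u = np.linspace(0.5, 2 * f - 0.5, 551)   # stay inside (0, 2f)
h = u[1] - u[0]

tau = np.log(u) + f / u
tau2 = (2 * f - u) / u**3                # analytic tau'' from the proof

# Second-order central difference approximation of tau''.
num2 = (tau[2:] - 2 * tau[1:-1] + tau[:-2]) / h**2

print((tau2 > 0).all(), float(np.max(np.abs(num2 - tau2[1:-1]))))
```

The positivity of τ″ on the whole interval is what rules out two distinct minimizers in the range 0 < ũ < 2f.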
Theorem 4.3 (Comparison principle)
Let {f}_{1} and {f}_{2} be in {L}^{\mathrm{\infty}}(\mathrm{\Omega}) with {inf}_{\mathrm{\Omega}}{f}_{1}>0 and {inf}_{\mathrm{\Omega}}{f}_{2}>0. Assume that {f}_{1}<{f}_{2} in Ω. We denote by {u}_{1} (resp., {u}_{2}) a solution of (12) for f={f}_{1} (resp., f={f}_{2}). Then we have {u}_{1}\le {u}_{2}.
Proof Let J(u) = ∫_Ω φ(|Du|). From Theorem 4 in [5], we know that u₁ and u₂ exist. Since u_i is a minimizer with data f_i (i = 1, 2), we have
and
Using the fact that J(inf({u}_{1},{u}_{2}))+J(sup({u}_{1},{u}_{2}))\le J({u}_{1})+J({u}_{2}) (see [12] or [13] for details), and adding these two inequalities (13) and (14), we obtain
Writing \mathrm{\Omega}=\{{u}_{1}>{u}_{2}\}\cup \{{u}_{1}\le {u}_{2}\}, we easily get that
that is,
Since {f}_{1}<{f}_{2}, we thus deduce that \{{u}_{1}>{u}_{2}\} has a zero Lebesgue measure; i.e., {u}_{1}\le {u}_{2} a.e. in Ω. □
Remark The above result agrees with intuition in image processing. Suppose that f₁ and f₂ are two noisy images, and that u₁ and u₂ are obtained from f₁ and f₂, respectively, by some denoising method. If the image f₁ is dimmer than f₂ at almost every pixel, then naturally u₁ should not be brighter than u₂.
5 Conclusion
In this paper, we have studied the mathematical properties of an important variational model proposed in our recent work [5] for removing multiplicative noise. For the first time, several important properties of the model, including the lower semicontinuity, the differential property, and the convergence and regularization properties, have been established. The well-posedness of the underlying mathematical problem and a comparison principle for solutions of the model have also been obtained.
References
Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259–268. 10.1016/0167-2789(92)90242-F
Rudin L, Lions PL, Osher S: Multiplicative denoising and deblurring: theory and algorithms. In Geometric Level Sets in Imaging, Vision, and Graphics. Edited by: Osher S, Paragios N. Springer, Berlin; 2003:103–119.
Aubert G, Aujol JF: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 2008, 68: 925–946. 10.1137/060671814
Jin ZM, Yang XP: A variational model to remove the multiplicative noise in ultrasound images. J. Math. Imaging Vis. 2011, 39: 62–74. 10.1007/s10851-010-0225-3
Hu XG, Zhang LT, Jiang W: Improved variational model to remove multiplicative noise based on partial differential equation. J. Comput. Appl. 2012, 32: 1879–1881.
Aubert G, Kornprobst P: Mathematical Problems in Image Processing. Appl. Math. Sci. 147. Springer, New York; 2002.
Chan TF, Esedoglu S: Aspects of total variation regularized L¹ function approximation. SIAM J. Appl. Math. 2005, 65: 1817–1837. 10.1137/040604297
Evans LC: Partial Differential Equations. Am. Math. Soc., Providence; 1998.
Ambrosio L: A compactness theorem for a new class of functions of bounded variation. Boll. Unione Mat. Ital. 1989, 7: 857–881.
Hardt R, Kinderlehrer D: Elastic plastic deformation. Appl. Math. Optim. 1983, 10: 203–246. 10.1007/BF01448387
Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115: 271–310. 10.1007/BF02392210
Chambolle A: An algorithm for mean curvature motion. Interfaces Free Bound. 2004, 6: 195–218.
Giusti E: Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Basel; 1994.
Acknowledgements
The first author is supported partly by the National Natural Science Foundation of China (11071266), and partly by the Chinese Scholarship Council during the author’s visit to Curtin University of Technology. The second author is supported by the Australian Research Council.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors completed the paper together. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Hu, X., Wu, Y.H. & Li, L. Analysis of a new variational model for image multiplicative denoising. J Inequal Appl 2013, 568 (2013). https://doi.org/10.1186/1029-242X-2013-568