
Sensors & Transducers, Vol. 171, Issue 5, May 2014, pp. 214-219


© 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com

A Fast Algorithm for Single Motion Image Deblurring

Yong Zhong Liao 1,2, Cai Zi Xing 1, Xiang Hua He 1

1 School of Information Science and Engineering, Central South University, Changsha, Hunan, 410002, China
2 Department of Information Science and Engineering, Hunan First Normal University, Changsha, Hunan, 410205, China

Tel.: 086073182841011, E-mail: [email protected]

Received: 15 April 2014 / Accepted: 15 May 2014 / Published: 31 May 2014

Abstract: Blind restoration of blurred images is a difficult problem in image processing; the key steps are estimating the Point Spread Function and applying a non-blind deconvolution algorithm. In this paper, we propose a fast, robust algorithm in the Radon transform domain for determining the blur kernel function. The blurred images are then restored using a modified fast non-blind deconvolution method based on an image prior. Compared with R. Fergus's algorithm, the experimental results show that, for the class of motion blur caused by linear movement parallel to the lens, our algorithm is faster and recovers images well. Copyright © 2014 IFSA Publishing, S. L.

Keywords: Blur kernel function, Motion blur, Radon transform, Blind restoration.

1. Introduction

Camera shake or movement causes blurry photographs, especially with a telephoto lens or a slow shutter speed. Removing these effects from an image is one of the primary problems in the field of image processing. Mathematically, the motion blurring of images is usually modeled as a convolution in the spatial domain:

g(x, y) = f(x, y) ⊗ h(x, y) + n(x, y), (1)

where g(x, y) is the blurred image, f(x, y) is the sharp image, h(x, y) is the blur kernel (or point spread function), ⊗ is the convolution operator, and n(x, y) is the additive noise. It will also be useful to

express the convolution in the frequency domain:

G(u, v) = F(u, v)H(u, v) + N(u, v), (2)

where the capital letters denote two-dimensional Discrete Fourier Transforms. If the blur kernel is unknown, the problem is called a blind deconvolution problem; otherwise it is a non-blind deconvolution problem. It is well known that the non-blind deconvolution problem is ill-conditioned because of its sensitivity to noise, and single-image blind deconvolution is even more ill-posed because both the PSF and the sharp image must be estimated.

The quality of deconvolution depends heavily on the estimated blur kernel, so accurate kernel estimation is crucial for restoring the blurred image.

In the past, many researchers have been working on recovering sharp images from motion-blurred images. Fergus et al. use ensemble learning


to recover the blur kernel while assuming a certain statistical distribution for natural image gradients [1]. A variational method is used to approximate the posterior distribution, and the Richardson-Lucy algorithm is then used for deconvolution. Levin advocated Bayesian inference methods in which the image is marginalized out of the optimization while the unknown blur kernel is estimated [2]. Krishnan et al. take a MAP_{f,h} approach but with l1/l2 regularization

to predict the sharp image in a coarse-to-fine iteration [3]. Dobeš et al. propose a fast method to estimate the blur kernel based on the power spectrum of the image gradient in the frequency domain [4]. Zoran et al. propose learning patch priors and performing patch-wise restoration [5]. These methods can require tremendous computation.

This paper proposes an improved method to estimate the blur kernel. We assume that the real blurred image results from the combined action of linear motion and noise. This assumption rests on the following consideration: if the distance between the imaging system and the object is constant and the image is blurred by camera motion, then the blur kernel traces the trajectory of camera shake during exposure. Exposure times are generally short in real photographs, so the trajectory of camera shake is approximately linear.

2. Motion Blur Model

Under the assumption above, the real motion blur kernel can be approximated by a linear blur kernel, so it can be represented by the motion length and direction alone. Therefore, h(x, y) is restricted to be a line segment:

h(x, y) = { 1/L,  if y = x tanθ, 0 ≤ x ≤ L cosθ
          { 0,    otherwise,  (3)

where L is the motion length, proportional to the relative velocity, and θ is the motion direction. The Fourier transform of h(x, y) is given by [6-7]:

H(u, v) = sin(πL(u cosθ / M + v sinθ / N)) / (πL(u cosθ / M + v sinθ / N)). (4)

The amplitude spectrum of H(u, v) is characterized by periodic zeros: a regular pattern of parallel stripes appears. We can estimate the parameters L and θ from these stripes and thus infer the blur kernel. In practice, however, the stripes are often hardly visible or are corrupted by other structures contained in the blurred image.
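The periodic zeros described above can be verified numerically. This sketch (NumPy; `linear_kernel` is our own illustrative helper) builds a discrete version of the line kernel of Eq. (3) and confirms that, for a horizontal kernel of length L on an N-point grid, the dark stripes of |H(u, v)| fall every N/L frequency bins:

```python
import numpy as np

def linear_kernel(L, theta_deg, size):
    """Discrete linear motion blur kernel (Eq. 3): a normalised line of length L at angle theta."""
    h = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(theta_deg)
    for s in np.linspace(0.0, L - 1, 8 * L):     # sample the segment densely, bin to pixels
        h[int(round(c + s * np.sin(t))), int(round(c + s * np.cos(t)))] = 1.0
    return h / h.sum()                           # 1/L weighting: the kernel sums to one

k = linear_kernel(8, 0.0, 64)
assert np.isclose(k.sum(), 1.0) and np.count_nonzero(k) == 8

# Eq. (4): for a horizontal kernel, |H| has zero stripes every N/L columns
H = np.abs(np.fft.fft2(k))
zero_cols = np.where(H[0] < 1e-9)[0]
assert np.array_equal(zero_cols, np.arange(64 // 8, 64, 64 // 8))
```

The stripe spacing N/L is what later makes the length recoverable once the direction is known.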

In this paper, we propose a robust algorithm based on Radon transform-domain to determine the angle and the length from a single photograph when the stripes are hardly visible.

3. Blur Kernel Estimation

As mentioned previously, the spectrum of a linear blur kernel contains a regular structure related to the length and direction of the kernel. Normally, the stripes are corrupted by noise. If we choose a proper filter to reduce the noise level and enhance the parallel stripes of the power spectrum, the stripes become clearer. The distance between neighboring stripes is inversely proportional to the parameter L of the blur kernel, and the blur direction is perpendicular to the direction of the stripes. Therefore, if we can find this regular pattern, we can infer the blur kernel. Sakano suggested an algorithm based on the Hough transform that can estimate the PSF accurately and robustly, even at low signal-to-noise ratios, but the method is computationally complex [6]. Dobeš et al. first compute the power spectrum of the image gradient and apply a band-pass Butterworth filter to smooth the spectrum and remove unwanted noise; the blur parameters are then found using the Radon transform [4]. Motivated by this, this paper proposes a fast and robust algorithm: the Ridgelet transform, which is well suited to representing linear features, is used to enhance the noisy frequency-spectrum image, and a robust algorithm based on the Radon transform is then used to estimate the blur kernel.

3.1. The Radon Transform

The Radon transform of a signal f(x, y) is the collection of integrals of f(x, y) along straight lines, where each line is conveniently parameterized by its offset ρ and orientation β through the line equation ρ = x cosβ + y sinβ. It can be expressed as [8]:

R(ρ, β) = ∫∫_{-∞}^{+∞} g(x, y) δ(x cosβ + y sinβ - ρ) dx dy, (5)

where β is the orientation of the radial line that we integrate over, ρ is the offset of that line from the origin of the coordinate system, and δ(·) is the Dirac delta function. We define R_G(β, ρ) as the Radon transform of G(u, v); then

R_G(β, ρ) = ∫∫_{-∞}^{+∞} G(u, v) δ(u cosβ + v sinβ - ρ) du dv. (6)
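A minimal discrete Radon transform in the spirit of Eqs. (5)-(6) can be sketched with nearest-neighbour rotation followed by column sums (our own simplification; production code would use a proper interpolating implementation such as skimage.transform.radon):

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (inverse mapping)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    a = np.deg2rad(angle_deg)
    xs = np.cos(a) * (xx - cx) + np.sin(a) * (yy - cy) + cx
    ys = -np.sin(a) * (xx - cx) + np.cos(a) * (yy - cy) + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    out = np.zeros_like(img)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def radon(img, thetas_deg):
    """Eq. (5): R(rho, beta) ~ column sums of the image rotated by -beta."""
    return np.stack([rotate_nn(img, -t).sum(axis=0) for t in thetas_deg], axis=1)

img = np.zeros((33, 33)); img[16, 16] = 1.0     # a point: every projection is a spike
R = radon(img, [0.0, 45.0, 90.0])
assert np.allclose(R.sum(axis=0), 1.0)          # the point's mass survives each projection
```

The zero-angle projection reduces to plain column sums, which gives a convenient sanity check for any discrete implementation.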


3.2. The Ridgelet Transform

We now briefly review the continuous Ridgelet transform defined by Candès and Donoho in [9]. The continuous Ridgelet transform stems from the Radon transform introduced above: it is simply the application of a one-dimensional wavelet to the slices of the Radon transform. The CRT of the frequency spectrum G(u, v) is expressed as [10]:

CRT_G(a, b, θ) = ∫∫_{-∞}^{+∞} ψ_{a,b,θ}(u, v) G(u, v) du dv, (7)

where the 2-D ridgelets ψ_{a,b,θ}(u, v) are defined from a 1-D wavelet-type function ψ(·) as:

ψ_{a,b,θ}(u, v) = a^{-1/2} ψ((u cosθ + v sinθ - b) / a). (8)

The ridgelet function is constant along the lines u cosθ + v sinθ = const. The Radon transform maps line singularities to point singularities, and wavelets are effective at capturing such zero-dimensional (point) singularities. The Ridgelet transform is therefore well suited to representing linear features, and we use this capability to enhance the line structures in the frequency spectrum of the image [11].

C'_CRT(i, j) = { γ sgn(C_CRT(i, j)) (|C_CRT(i, j)| - χ),  if |C_CRT(i, j)| > χ
              { 0,                                       otherwise,  (9)

where χ is the threshold value and γ is the enhancement coefficient. Two results are shown in Fig. 1.
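Eq. (9) amounts to soft thresholding of the ridgelet coefficients followed by amplification: coefficients with magnitude below the threshold χ are zeroed and the rest are boosted by γ. A sketch (NumPy; helper name is ours):

```python
import numpy as np

def enhance(C, chi, gamma):
    """Eq. (9): zero coefficients below the threshold chi, amplify the rest by gamma."""
    return np.where(np.abs(C) > chi,
                    gamma * np.sign(C) * (np.abs(C) - chi),   # soft-threshold, then gain
                    0.0)

C = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
assert np.allclose(enhance(C, chi=1.0, gamma=2.0), [-4.0, 0.0, 0.0, 0.0, 2.0])
```

Because the noisy background produces many small coefficients while the stripes produce a few large ones, this operation suppresses noise and strengthens the line structure in one pass.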

Fig. 1. Frequency spectrogram after Ridgelet enhancement.

3.3. Blur Direction Estimation

In recent years, research on natural image statistics has shown that, although images of real-world scenes vary greatly in their absolute color distributions, the shape of a one-dimensional cross-section of the logarithm of the power spectrum of a natural image is independent of the cross-section's angle [12-13], as shown in Fig. 2.

Fig. 2. One-dimensional cross-sections of the log10 power spectrum versus pixels, at 0 and 45 degrees, with a fitted curve (only the positive part is shown).

In fact, more thorough investigations have found that the log-power spectrum obeys an exponential law: log F(u, u tanφ) ≈ ε u^ζ, where ε and ζ are constants within the same image. We define G_d(u, v) = log G(u, v), H_d(u, v) = log H(u, v), and F_d(u, v) = log F(u, v). The shape of the logarithm of the power spectrum of a natural image along the line v = u tanφ is independent of the angle φ. Neglecting noise, G_d ≈ H_d + F_d, and by the linearity of the Radon transform we can write:

R_d(r, θ)[G_d(u, v)] = R_d(r, θ)[H_d(u, v)] + R_d(r, θ)[F_d(u, v)],
E(R_d(θ)[G_d(u, v)]) = E(R_d(θ)[H_d(u, v)]) + E(R_d(θ)[F_d(u, v)]),
|R_d(θ)[G_d(u, v)] - E(R_d(θ)[G_d(u, v)])|
  ≤ |R_d(θ)[H_d(u, v)] - E(R_d(θ)[H_d(u, v)])|
  + |R_d(θ)[F_d(u, v)] - E(R_d(θ)[F_d(u, v)])|.  (10)

Since F_d(u, v) is independent of the angle θ, the second term of Eqn. (10) is approximately constant. When the angle θ equals the direction of motion, the expression attains an extreme value. We can therefore define a new expression:

Q(θ) = |R_d(θ)[G_d(u, v)] - E(R_d(θ)[G_d(u, v)])|. (11)

Thus, a reasonable estimate of the blur angle follows from the cost function:

θ_es = arg max_θ Q(θ). (12)


3.4. Blur Length Estimation

Once θ is known, we can estimate the motion blur length of the blur kernel by measuring the distance between the parallel stripes: we rotate the power spectrum by θ degrees, and the method of [7] is then used to estimate the blur length.
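For the axis-aligned case, the length estimate reduces to measuring the stripe spacing directly. A sketch (NumPy; the dip detector and its threshold are our own simplification, not the method of [7]): average the log-spectrum over rows, locate the deep dips, and return L = N / spacing:

```python
import numpy as np

def estimate_length(g):
    """Stripe-spacing length estimate for a horizontal blur (theta = 0):
    zeros of |H| are N/L columns apart, so L = N / spacing."""
    n = g.shape[1]
    S = np.log(np.abs(np.fft.fft2(g)) + 1e-6)
    prof = S.mean(axis=0)                       # row-averaged log-spectrum
    dips = np.where(prof < prof.mean() - 2.0 * prof.std())[0]
    spacing = int(np.min(np.diff(dips)))        # assumes at least two visible stripes
    return round(n / spacing)

f = np.random.default_rng(2).random((64, 64))
h = np.zeros((64, 64)); h[0, :8] = 1.0 / 8.0    # horizontal blur, L = 8
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
assert estimate_length(g) == 8
```

For an arbitrary θ, the spectrum would first be rotated by the estimated angle so the stripes become vertical, after which the same profile analysis applies.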

4. Experiments of Kernel Estimation

To validate our algorithm, we tested it on synthetic cases, estimating the blur angle and blur length. We considered several different motion blur lengths and angles to produce motion blur kernels, then obtained artificially blurred, noisy images by convolving a sharp image with a known shift-invariant motion blur kernel and adding white Gaussian noise. The 512×512 'Lena' image was chosen as the sharp test image. The results are briefly compared with the approach published in [14], as shown in Fig. 3 and Fig. 4 (Algorithm I is our method).

Fig. 3. Error of θ versus angle for Algorithm I and Algorithm II (L = 4; BSNR = 10).

Fig. 4. Blur length estimation error for Algorithm I and Algorithm II at θ = 10°, 30°, 60°, and 80°.

The experiments revealed that our method determines the length and angle of the blur more accurately when the power of the additive noise is large and the relative motion is small. These results also show the superiority of the present algorithm: in particular, when the noise power is relatively large, i.e. the BSNR is relatively low, the estimation performance of our algorithm remains good.

5. Image Deblurring

Once we have the motion blur kernel, we can use a variety of non-blind deconvolution methods to recover the clear image f(x, y) from the blurred image g(x, y), such as the simple Richardson-Lucy algorithm [15] or the Total Variation regularization method [16]. The disadvantage of RL is its sensitivity to errors in the estimated kernel, which produce ringing artifacts in f(x, y). Because our method replaces the real blur kernel with a linear motion blur kernel, some kernel estimation error is inevitable. Therefore, we apply the following modified version of the non-blind Total Variation deconvolution algorithm.

5.1. Total Variation Algorithm

One of the most successful regularization approaches is the Total Variation (TV) regularization method, which can effectively recover the edges of an image. The mathematical formulation used in the image recovery problem is stated as follows:

min_f ||h ⊗ f - g||²₂ + λ ∫_Ω |∇f| dx dy, (13)

where, for simplicity, f, g, h denote f(x, y), g(x, y), h(x, y); Ω is an open rectangular domain in R²; ∇f is the gradient of the image; and λ is a balancing weight. We rearrange the elements of the images f and g into column vectors.

H_T is a linear blurring operator, D_x and D_y are the horizontal and vertical discrete derivative operators, and D is the gradient operator. We have:

min_f ||H_T f - g||²₂ + λ ||((D_x f)² + (D_y f)²)^{1/2}||₁, (14)

where ||H_T f - g||²₂ is the fidelity term and ||((D_x f)² + (D_y f)²)^{1/2}||₁ is the regularization term. We define:

H_T f - g = e = (e_1, e_2, ..., e_{M·N})^T;  D f = (D_x f; D_y f);

D_y f = ∂f/∂y = σ_y = (σ_y1, σ_y2, ..., σ_{y(M·N)})^T;
D_x f = ∂f/∂x = σ_x = (σ_x1, σ_x2, ..., σ_{x(M·N)})^T;

then

||((D_x f)² + (D_y f)²)^{1/2}||₁ = Σ_{i=1}^{M·N} ((σ_yi)² + (σ_xi)²)^{1/2}. (15)

5.2. Improved Total Variation Algorithm

The standard ROF model, Eqn. (13), is well known to have certain limitations. One important issue is the loss of contrast in the solution, even for noise-free observed images. Moreover, since our method uses an approximate linear motion blur kernel to estimate the sharp image, it introduces an estimation error. At the same time, in order to reduce noise, we add a new regularization term ||f - NLM(f)||²₂, and so we minimize the following energy function:

min_f J(f) = min_f ||H_T f - g||²₂ + κ ||f - NLM(f)||²₂ + λ ||((D_x f)² + (D_y f)²)^{1/2}||₁, (16)

where NLM(·) is the non-local means denoising operation [17] and κ is its weight. We use the Iteratively Reweighted algorithm [18] to optimize Eqn. (16). We set

W_z = diag(w_i),  w_i = { (σ_xi² + σ_yi²)^{-1/2},  if σ_xi² + σ_yi² > ε_z
                        { 0,                        otherwise,

where ε_z is an infinitesimal threshold value. We then have:

||((∂f/∂x)² + (∂f/∂y)²)^{1/2}||₁ = ||W_z^{1/2} (∂f/∂x)||²₂ + ||W_z^{1/2} (∂f/∂y)||²₂. (17)

We define S^{(l)} = NLM(f^{(l-1)}). Setting J'(f) = 0, we have:

f = (H_Tᵀ H_T + λ Dᵀ W_z D + κ I)^{-1} (H_Tᵀ g + κ S). (18)

Pseudo-code:

1: Initialize
   f^{(0)} = (H_Tᵀ H_T + λ Dᵀ D + κ I)^{-1} H_Tᵀ g.
2: Repeat, for l = 1, 2, 3, ...:
   1) Update the weighting matrix
      W_z^{(l)} = diag(w_i),  w_i = (σ_xi² + σ_yi²)^{-1/2} if σ_xi² + σ_yi² > ε_z, and 0 otherwise;
   2) Update the NLM term
      S^{(l)} = NLM(f^{(l-1)});
   3) Update the solution
      f^{(l)} = (H_Tᵀ H_T + λ Dᵀ W_z^{(l)} D + κ I)^{-1} (H_Tᵀ g + κ S^{(l)}).

6. Experiments

In this section, we tested our method on synthetic and real data, as shown in Fig. 5. Our method is compared with Fergus's method. The results show that our method achieves equal or better accuracy than Fergus's method while effectively reducing the computation time.

Fig. 5. Results: (a) is the synthetic blurred image; (b), (c) are real-world images; (d), (e), (f) are the images deblurred by the method in [1]; (j), (k), (l) are the images deblurred by our method; (g), (h), (i) are the blur kernels estimated by the method in [1]; (m), (n), (o) are the blur kernels estimated by our method.

7. Conclusions

This paper proposes a new blind deconvolution method for single motion-blurred images that is more robust than Fergus's method when the SNR is low and the relative motion is small. We then use a fast non-blind deconvolution algorithm, based on a prior on image gradient magnitudes, that generates high-quality final results. Experiments reveal that our method achieves equal or better results than Fergus et al.'s while reducing the computation time.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 61175064.

References

[1]. Fergus R., Singh B., Hertzmann A., et al., Removing camera shake from a single photograph, ACM Transactions on Graphics, 25, 3, 2006, pp. 787-794.

[2]. Levin A., Weiss Y., Durand F., et al., Efficient marginal likelihood optimization in blind deconvolution, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 2657-2664.

[3]. Krahmer F., Lin Y., McAdoo B., et al., Blind image deconvolution: Motion blur estimation, IMA Preprints Series, 2006, pp. 2133-2135.

[4]. Dobeš M., Machala L., Fürst T., Blurred image restoration: A fast method of finding the motion length and angle, Digital Signal Processing, 20, 6, 2010, pp. 1677-1686.

[5]. Zoran D., Weiss Y., From learning models of natural image patches to whole image restoration, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011, pp. 479-486.

[6]. Sakano M., Suetake N., Uchino E., A PSF estimation based on Hough transform concerning gradient vector for noisy and motion blurred images, IEICE Transactions on Information and Systems, 90, 1, 2007, pp. 182-190.

[7]. Liao Y. Z., Cai Z. X., He X. H., Fast algorithm for motion blurred image blind deconvolution, Optics and Precision Engineering, 21, 10, 2013, pp. 2688-2695.

[8]. Toft P. A., Sørensen J. A., The Radon transform: theory and implementation, Technical University of Denmark, Department of Informatics and Mathematical Modeling, 1996.

[9]. Candes E. J., Ridgelets: theory and applications, Stanford University, 1998.

[10]. Do M. N., Vetterli M., The finite ridgelet transform for image representation, IEEE Transactions on Image Processing, 12, 1, 2003, pp. 16-28.

[11]. Starck J. L., Murtagh F., Candes E. J., et al., Gray and color image contrast enhancement by the curvelet transform, IEEE Transactions on Image Processing, 12, 6, 2003, pp. 706-717.

[12]. Hyvèarinen A., Hurri J., Hoyer P. O., Natural Image Statistics, Springer, 2009.

[13]. Carasso A., Direct blind deconvolution, SIAM J. Appl. Math., 61, 6, 2001, pp. 1980-2007.

[14]. Oliveira J. P., Figueiredo M. A. T., Bioucas-Dias J. M., Blind estimation of motion blur parameters for image deconvolution, Pattern Recognition and Image Analysis, Springer Berlin Heidelberg, 2007, pp. 604-611.

[15]. Richardson W. H., Bayesian-based iterative method of image restoration, Journal of the Optical Society of America, 62, 1, 1972, pp. 55-59.

[16]. Sroubek F., Milanfar P., Robust multichannel blind deconvolution via fast alternating minimization, IEEE Transactions on Image Processing, 21, 4, 2012, pp. 1687-1700.

[17]. Buades A., Coll B., Morel J. M., A non-local algorithm for image denoising, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (CVPR'05), 2, 2005, pp. 60-65.

[18]. Rodríguez P., Wohlberg B., A generalized vector-valued total variation algorithm, in Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), 2009, pp. 1309-1312.

___________________

2014 Copyright ©, International Frequency Sensor Association (IFSA) Publishing, S. L. All rights reserved. (http://www.sensorsportal.com)