Gaussian KD tree filtering (reading group presentation)

Transcript

Gaussian KD-Trees for Fast High-Dimensional Filtering
Andrew Adams, Natasha Gelfand, Jennifer Dolson & Marc Levoy

Reading group presentation September 2009

Bilateral Filters

The p's are the pixel locations, the v's are the colour values at those locations, and the σ's are the standard deviations.
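For reference, the filter these symbols define is the standard bilateral filter; writing σ_p for the spatial standard deviation and σ_c for the colour one, it can be stated as:

\hat v_i = \frac{\sum_j \exp(-\|p_i - p_j\|^2 / 2\sigma_p^2)\, \exp(-\|v_i - v_j\|^2 / 2\sigma_c^2)\, v_j}{\sum_j \exp(-\|p_i - p_j\|^2 / 2\sigma_p^2)\, \exp(-\|v_i - v_j\|^2 / 2\sigma_c^2)}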

Bilateral filtering for gray and color images; Tomasi & Manduchi, ICCV 1998

Unit-σ axes

We often want different standard deviations along different axes (σ_p for position, σ_c for colour). We can remove them from the bilateral equation by pre-multiplying the data.
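Concretely, pre-multiplying just means rescaling each coordinate by its standard deviation before filtering, e.g. p_i ← p_i / σ_p and v_i ← v_i / σ_c, after which every Gaussian in the filter has standard deviation one.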

General Bilateral Filters

We can simplify to the single-Gaussian form shown below by:

Treating p as a vector of both location and colour. Pre-multiplying p such that σ is accounted for.
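With those two steps the filter reduces to one unit-variance Gaussian over the combined position-and-colour vectors; a standard way of writing this, consistent with the earlier equation, is:

\hat v_i = \frac{\sum_j \exp(-\tfrac{1}{2}\|p_i - p_j\|^2)\, v_j}{\sum_j \exp(-\tfrac{1}{2}\|p_i - p_j\|^2)}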

Expensive: calculating the filtered value of every pixel in an image uses every other pixel, which is O(n²d).

(for n data points of dimension d)
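A naive implementation makes the quadratic cost explicit; this is a minimal sketch (the function name and NumPy usage are illustrative, not the paper's code):

```python
import numpy as np

def brute_force_gaussian_filter(points, values):
    """Naive O(n^2 d) high-dimensional Gaussian filter.

    points: (n, d) array of unit-sigma positions (location + colour, pre-scaled)
    values: (n, k) array of values to filter (e.g. the colours themselves)
    """
    n = points.shape[0]
    out = np.zeros_like(values, dtype=float)
    for i in range(n):
        # Squared distance from point i to every other point: O(n d) work per pixel.
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        w = np.exp(-0.5 * d2)
        out[i] = w @ values / w.sum()
    return out
```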

Bilateral Grid

Real-time Edge-Aware Image Processing with the Bilateral Grid; Chen, Paris & Durand, SIGGRAPH 2007

But as we reduce the dimensions...

Can't only use luminance, should use colours too. Paris & Durand, 2006, A Fast Approximation of the Bilateral Filter using a Signal Processing Approach, IJCV.

Three tricks we can use:

Ignore small things that happen a long way away: sphere intersections / KD-trees

So many samples, they won't notice if we miss a few: importance sampling of the KD-tree

Blurry things all look the same: Gaussian filtering can be computed at a lower resolution and interpolated


KD-trees: a space-subdivision technique

http://rimlab.ce.unipr.it/RIMLabImages/courses/robotica/AA0405/RizziniGerelli/kd_tree.png

http://picasaweb.google.com/lh/photo/ojF_g0HHrpfqwZd7JMHwuA
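As a rough illustration of the subdivision, here is a minimal KD-tree build sketch in Python: it splits the longest axis of each cell at its midpoint and stops once the cell is smaller than a spacing threshold, in the spirit of the σs criterion on the next slide. The class names, stopping rule and leaf representative are my own choices, not the paper's code:

```python
import numpy as np

class Leaf:
    """Leaf cell storing one representative sample position."""
    def __init__(self, position):
        self.position = position

class Node:
    """Inner cell: cut dimension, cut value, and the cell's extent along that dimension."""
    def __init__(self, dim, cut, lo, hi, left, right):
        self.dim, self.cut, self.lo, self.hi = dim, cut, lo, hi
        self.left, self.right = left, right

def build(points, lo, hi, spacing):
    """Recursively subdivide the cell [lo, hi] until it is smaller than `spacing`.

    points: (k, d) array of the (non-empty) data points inside this cell
    lo, hi: (d,) arrays giving the cell bounds
    """
    extent = hi - lo
    dim = int(np.argmax(extent))                      # split the longest axis
    if extent[dim] <= spacing or len(points) <= 1:
        return Leaf(points.mean(axis=0))              # one representative point per leaf
    cut = 0.5 * (lo[dim] + hi[dim])                   # midpoint split
    left = points[:, dim] < cut
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[dim], lo_r[dim] = cut, cut
    if left.all():                                    # everything fell on one side:
        return build(points, lo, hi_l, spacing)       # shrink the cell and retry
    if not left.any():
        return build(points, lo_r, hi, spacing)
    return Node(dim, cut, lo[dim], hi[dim],
                build(points[left], lo, hi_l, spacing),
                build(points[~left], lo_r, hi, spacing))

# Example: 5-D points (x, y, r, g, b), already scaled to unit sigma:
# pts = np.random.rand(1000, 5)
# tree = build(pts, pts.min(axis=0), pts.max(axis=0), spacing=0.3)
```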

Is the spacing < σs?

The final blur radius is then

√(2σs² + σb²)

(splatting with σs, blurring with σb and slicing with σs compose, and the Gaussian variances add). We consider the sampling dense enough when the maximum spacing between data points in the reduced space is σs. We typically set σb = 3σs. And the final blur radius should be one, since we scaled the values earlier.
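Plugging in those choices as a quick check (my arithmetic, not from the slides):

σb = 3σs  ⇒  √(2σs² + 9σs²) = √11 · σs = 1  ⇒  σs = 1/√11 ≈ 0.30

in the unit-σ space.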

Monte Carlo Gaussian approximation

Use a finite number of samples to approximate the Gaussian. Used for the splatting, blurring and slicing stages.

Query the KD tree specifying a location, and a number of samples

The sample count decides which portions of the tree are evaluated

The Gaussian integral is divided at each internal node. Only those leaf nodes that samples reach are evaluated. Returns a weight for each leaf node.
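A sketch of such a query, reusing the Node/Leaf classes from the KD-tree build sketch above. The routing rule (dividing the sample budget in proportion to the Gaussian integral over each child's extent in the cut dimension) follows the description on this slide; the function names and details are my own:

```python
import math, random

def gauss_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2); used to integrate the query Gaussian over a node's extent."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def query(node, q, sigma, n_samples, weights):
    """Send n_samples Monte Carlo samples of a Gaussian centred at q down the tree.

    The Gaussian integral over the cell is split between the two children at each
    internal node, and the sample budget is divided in proportion. Each leaf the
    samples reach accumulates the number of samples that arrived; dividing by the
    total sample count afterwards gives the per-leaf weights.
    """
    if n_samples == 0:
        return
    if isinstance(node, Leaf):                        # Leaf from the build sketch above
        weights[node] = weights.get(node, 0) + n_samples
        return
    d = node.dim
    total = gauss_cdf(node.hi, q[d], sigma) - gauss_cdf(node.lo, q[d], sigma)
    p_left = 0.5 if total <= 0 else \
        (gauss_cdf(node.cut, q[d], sigma) - gauss_cdf(node.lo, q[d], sigma)) / total
    n_left = sum(random.random() < p_left for _ in range(n_samples))
    query(node.left, q, sigma, n_left, weights)
    query(node.right, q, sigma, n_samples - n_left, weights)
```

The splatting, blurring and slicing stages would each call a query like this with the appropriate standard deviation (σs, σb and σs respectively) and accumulate values at the returned leaves.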

Performance

Tree construction: O(nd log m)
Single Gaussian query: O(s(log m + d))
Scatter: O(sn(log m + d))
Blur: O(sm(log m + d))
Slice: O(sn(log m + d))
Total: O(dn log n)

(for n d-dimensional data points, subsampled to m points, with s Gaussian samples per query)

GPU implementation

KD-tree on the GPU: Zhou et al. 2008. No recursion; implemented iteratively. Exploit parallelism for point casting.

Non-Local means smoothing

How about we look all over the image for pixels to average over? Weight each one by how similar its neighbourhood is.

A non-local algorithm for image denoising; Buades et al., CVPR 2005

Too many dimensions

One dimension for each value in the neighbourhood. If the dimensionality is much larger than log(m), chances are we won't make a decision based on a given dimension when evaluating the tree. So use PCA over all patches to choose a subset of dimensions: the tree splits on the dimensions of highest variance.
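A minimal sketch of that reduction for a grayscale image (NumPy; the patch radius, number of kept dimensions and function names are illustrative choices):

```python
import numpy as np

def patch_vectors(image, radius=3):
    """Stack each pixel's (2*radius+1)^2 neighbourhood into a row vector."""
    h, w = image.shape
    pad = np.pad(image, radius, mode="reflect")
    patches = np.stack([pad[y:y + h, x:x + w]
                        for y in range(2 * radius + 1)
                        for x in range(2 * radius + 1)], axis=-1)
    return patches.reshape(h * w, -1)            # one row per pixel

def reduce_patches(patches, keep=8):
    """Project patches onto their `keep` highest-variance principal directions."""
    centred = patches - patches.mean(axis=0)
    # Principal directions via SVD; rows of vt are sorted by decreasing variance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:keep].T                 # (n_pixels, keep) reduced coordinates
```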

[Result images: input, non-local means output, bilateral filter output]

Criticism

Not a true Gaussian filter.
Three small filter approximations will not transfer energy between remote regions at all, while one big filter will.
Statistical!