Yesterday I uploaded the paper **An extended Perona-Malik model based on probabilistic models** by Lars Mescheder and myself to the arXiv. I already blogged about this work recently, so I do not have to add that much. The main theme of the work is that, if we have an image that is a blurred and noisy version of some true image, we formulate the reconstruction via Bayesian statistics. As a prior model for images we used a Gaussian scale mixture, i.e. we have a latent variable $z(x)$ (in every pixel $x$) and the joint prior for the image $u$ and the latent variable $z$ is

$$p(u,z) \propto \exp\Big(-\sum_x \Big[\tfrac{z(x)}{2}\,|\nabla u(x)|^2 + v(z(x))\Big]\Big)$$

where $x$ denotes the pixels, $\nabla$ is the discrete gradient of the image and $v$ is some non-negative function defined on the non-negative reals. Besides algorithms to approximate the MAP estimate, Lars proposed a mean-field approximation which does not compute the most probable image, but iteratively approximates the posterior distribution by distributions which factorize over the variables $u$ and $z$. Using a further approximation (since the resulting algorithm in its plain form is not implementable), one arrives at an algorithm which keeps more of the uncertainty in $z$ and, in practice, gives a point estimate for the denoised image that seems more "representative". Here is an example:
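To make the notation concrete, here is a small numpy sketch (my own illustration, not code from the paper) that evaluates the negative log of a joint prior of the Gaussian-scale-mixture form $\sum_x [z(x)/2\,|\nabla u(x)|^2 + v(z(x))]$; the function name `neg_log_prior` and the periodic forward-difference gradient are my assumptions:

```python
import numpy as np

def neg_log_prior(u, z, v):
    """Negative log of the (unnormalized) joint prior
        p(u, z) ~ exp( - sum_x [ z(x)/2 * |grad u(x)|^2 + v(z(x)) ] )
    with a forward-difference discrete gradient and periodic boundaries."""
    gx = np.roll(u, -1, axis=0) - u  # horizontal differences
    gy = np.roll(u, -1, axis=1) - u  # vertical differences
    return np.sum(z / 2.0 * (gx ** 2 + gy ** 2) + v(z))
```

For instance, with the linear choice `v = lambda t: 0.5 * t` (which corresponds to an exponential prior on the edge weight, see the comments below the post) a large value of $z(x)$ penalizes gradients at $x$ strongly, while a small $z(x)$ lets an edge pass cheaply.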

This is the blurred and noisy image (the blur is motion blur, implemented with periodic boundary conditions for simplicity):

The next image is the approximation of the MAP estimate we obtained, and you see the usual drawbacks: since the MAP neglects all uncertainty in $z$ and maximizes the posterior probability, the image is "too sharp" in the sense that smooth transitions (e.g. at the lighthouse) turn into piecewise constant stairs. Also the rocks appear very blocky.

Here is the result of the mean-field approximation. Since the uncertainty in $z$ is taken into account, the image does not suffer from staircasing and also has a more natural appeal, most prominently in the part with the rocks.
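For readers who want to experiment, here is a rough sketch of how a MAP-type estimate could be approximated by alternating minimization for the denoising-only case (no blur). The specific penalty $v(z) = z - \log z$, the step size, and all names are my own illustrative choices, not the algorithms from the paper; with this $v$, the pointwise minimization in $z$ happens to produce the familiar Perona-Malik-type diffusivity $z = 1/(1 + |\nabla u|^2/2)$:

```python
import numpy as np

def grad(u):
    # forward differences with periodic boundary conditions
    return np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u

def div(px, py):
    # discrete divergence, the negative adjoint of grad (periodic)
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def map_denoise(f, lam=1.0, outer=20, inner=30, tau=0.1):
    """Alternating minimization sketch for
        lam/2 ||u - f||^2 + sum_x [ z(x)/2 |grad u(x)|^2 + v(z(x)) ]
    with the illustrative choice v(z) = z - log(z), whose pointwise
    z-update has the closed form z = 1 / (1 + |grad u|^2 / 2)."""
    u = f.astype(float).copy()
    for _ in range(outer):
        gx, gy = grad(u)
        z = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / 2.0)  # closed-form argmin in z
        for _ in range(inner):  # gradient steps on the quadratic u-problem
            gx, gy = grad(u)
            u -= tau * (lam * (u - f) - div(z * gx, z * gy))
    return u
```

Note how the two drawbacks discussed above show up here: the alternation commits to a single "most plausible" edge map $z$ in every outer iteration, which is exactly the neglect of uncertainty that causes staircasing.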

The paper has some more examples and also shows a relation to Mumford-Shah denoising (loosely speaking, one uses a discrete latent variable that serves as a binary indicator of whether a pixel is an edge or not). Oh, by the way, the algorithms behind the mean-field approximation use parts of a more general duality between random variables that Lars and his co-authors develop in another paper.


April 27, 2017 at 9:02 am

It is very interesting to me 🙂 It seems that if $v=0$, the prior leads to an energy with spatially adaptive regularization. Could you explain a little more about the additional term $v(z(x))$? (I just want to understand it better.)

April 27, 2017 at 10:01 am

Yes, $z$ is a spatially adaptive edge weight, and one can view $v$ as the prior for this edge weight. For example, a linear $v$ puts an exponential distribution on $z$.
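To spell the example out (assuming the linear choice $v(z) = \lambda z$, which is my reading of the reply above): the factor of the prior that involves only the latent variable at a pixel $x$ is

```latex
% Linear penalty => exponential prior on the edge weight z(x):
v(z) = \lambda z
\quad\Longrightarrow\quad
p(z(x)) \propto e^{-v(z(x))} = e^{-\lambda z(x)},
\qquad z(x) \ge 0,
```

i.e. an exponential distribution with rate $\lambda$, so the prior favors small edge weights and hence weak smoothing.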