Today there are several things I could blog about. The first is the plenary by Rich Baraniuk on compressed sensing. However, I don’t think that I could reflect the content in a way that would be helpful for a potential reader. Just for the record: if you have the chance to attend one of Rich’s talks, do it!

The second thing is the talk by Bernd Hofmann on source conditions, smoothness and variational inequalities and their use in regularization of inverse problems. However, this would be too technical for now and I just did not take enough notes to write a meaningful post.

As a third thing there is the talk by Christian Clason on inverse problems with uniformly distributed noise. He argued that for uniform noise it is much better to use an $L^\infty$ discrepancy term instead of the usual $L^2$ one. He presented a path-following semismooth Newton method to solve the problem

$$\min_u\ \|Ku - g^\delta\|_{L^\infty} + \frac{\alpha}{2}\,\|u\|_{L^2}^2$$
and showed examples with different kinds of noise. Indeed, the examples showed that $L^\infty$ works much better than $L^2$ here. But in fact it works even better if the noise is not uniformly distributed but “impulsive”, i.e. it attains its bounds almost everywhere. It seems to me that uniform noise would need a slightly different penalty, but I don’t know which one – probably you do? Moreover, Christian presented the balancing principle to choose the regularization parameter (without knowledge of the noise level), and this was the first time I really got what it’s about. What one does here is to choose $\alpha$ such that (for some constant $\sigma$ which depends only on the problem, but not on the noise)

$$\sigma\,\|Ku_\alpha - g^\delta\|_{L^\infty} = \frac{\alpha}{2}\,\|u_\alpha\|_{L^2}^2.$$
The rationale behind this is that the discrepancy on the left-hand side is monotonically non-decreasing in $\alpha$, while $\|u_\alpha\|$ on the right-hand side is monotonically non-increasing. Hence, there should be some $\alpha$ “in the middle” which makes both sides roughly equally large. Of course, we want neither to “over-regularize” (which would usually “smooth too much”) nor to “under-regularize” (which would not eliminate the noise). Hence, balancing seems to be a valid choice. From a practical point of view balancing is also nice because one can use the fixed-point iteration

$$\alpha_{n+1} = \frac{2\sigma\,\|Ku_{\alpha_n} - g^\delta\|_{L^\infty}}{\|u_{\alpha_n}\|_{L^2}^2}$$
which typically converges within a few iterations.
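To make the parameter choice concrete, here is a minimal numpy sketch of such a balancing fixed-point iteration. To keep each subproblem a simple linear solve, the sketch uses a plain Tikhonov problem with the usual $L^2$ discrepancy rather than the $L^\infty$ one from the talk; the operator `K`, the data `g`, and the constant `sigma` are all made up for illustration.

```python
import numpy as np

# Sketch of a balancing fixed-point iteration for Tikhonov regularization
#     min_u 1/2 ||K u - g||^2 + alpha/2 ||u||^2.
# K, g, u_true and sigma are made-up illustration values, not from the talk.
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))
u_true = rng.standard_normal(10)
g = K @ u_true + 0.01 * rng.standard_normal(20)

def tikhonov(alpha):
    """Minimizer of 1/2||Ku - g||^2 + alpha/2||u||^2 via the normal equations."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

sigma = 1.0
alpha = 1.0
for _ in range(200):
    u = tikhonov(alpha)
    # balance sigma * ||Ku - g|| against alpha/2 * ||u||^2
    alpha_new = 2.0 * sigma * np.linalg.norm(K @ u - g) / np.linalg.norm(u) ** 2
    if abs(alpha_new - alpha) <= 1e-12 * alpha:
        break
    alpha = alpha_new
print(alpha)
```

In practice the update stabilizes after a handful of iterations, since both the residual and the norm of the minimizer vary slowly in $\alpha$ near the balanced value.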
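Coming back to the remark above about noise that attains its bounds: a toy example (not from the talk; all numbers are made up) shows why an $L^\infty$ data fit is natural there. When estimating a constant, the $L^\infty$ minimizer is the midrange of the data, which recovers the constant exactly as soon as both bounds occur in the sample, while the least-squares estimate (the mean) keeps an error of order $1/\sqrt{n}$.

```python
import numpy as np

# Toy example: recover a constant c from data corrupted by "impulsive"
# noise that attains its bounds +-b almost everywhere.
# c, b and n are made-up illustration values.
rng = np.random.default_rng(1)
c, b, n = 0.7, 0.2, 101
noise = b * rng.choice([-1.0, 1.0], size=n)  # attains the bounds everywhere
g = c + noise

u_l2 = g.mean()                      # argmin_u ||u - g||_2 (least squares)
u_linf = 0.5 * (g.min() + g.max())   # argmin_u ||u - g||_inf (midrange)

# The midrange recovers c exactly once both bounds +-b are attained,
# while the mean carries a random error of order b/sqrt(n).
print(abs(u_l2 - c), abs(u_linf - c))
```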

Then there was the talk by Esther Klann but, unfortunately, I was late and so only heard the second half…

Last but not least we have the talk by Christiane Pöschl. If you are interested in total variation denoising (TV denoising), then you have probably heard many times that “TV denoising preserves edges” (have a look at the Wikipedia page – it claims this twice). What Christiane showed (in joint work with Vicent Caselles and M. Novaga) is that this claim is not true in general, but only in very special cases. In the case of characteristic functions, the only sets for which the TV minimizer has sharp edges are the so-called calibrable sets, introduced by Caselles et al. Building on earlier works by Caselles and co-workers she calculated *exact minimizers* for TV denoising in the case that the image consists of the characteristic functions of two convex sets or of a single star-shaped domain. That is, for a given set $C$ she calculated the solution of

$$\min_u\ \frac{1}{2}\int (u - \chi_C)^2\,dx + \lambda\,TV(u).$$
This is not as easy as it may sound. Even for a single convex set one has to make some effort to compute the minimizer. She presented a nice connection between the shape of the obtained level sets and the morphological operators of closing and opening. With the help of this link she derived a methodology to obtain the exact TV-denoising minimizer for all regularization parameters. I do not have the images right now, but be assured that most of the time the minimizers do *not* have sharp edges all over the place. Even for simple geometries (like two rectangles touching in a corner) strange things happen and only very few sharp edges appear. I’ll keep you posted in case the paper comes out (or appears as a preprint).

Christiane has some nice images which make this much more clear:

For two circles, edges are preserved if the circles are far enough away from each other. If they are close, the area “in between” them is filled in and, moreover, exhibits a fuzzy boundary. I remember seeing effects like this in the output of TV solvers and thinking “well, it seems that the algorithm is either not good or has not converged yet – TV should output sharp edges!”.

For a star-shaped domain (well, actually a star) the output looks like this. The corners are not only rounded but also blurred, and this is true for both the “outer” corners and the “inner” corners.

So, if you have any TV-minimizing code, go ahead and check whether it actually does the right thing on images like this!
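For such a check, a minimal self-contained numpy sketch using Chambolle’s projection algorithm for the ROF problem (A. Chambolle, JMIV 2004) on a synthetic two-discs image might look as follows; the grid size, disc radii, regularization parameter and iteration count are made up, and the fixed-length loop only returns an approximate minimizer.

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]
    d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]
    d[:, 1:] -= py[:, :-1]
    return d

def tv_denoise(f, lam, n_iter=300, tau=0.125):
    """Approximate minimizer of 1/2 ||u - f||^2 + lam * TV(u)
    via Chambolle's dual projection algorithm (tau <= 1/8)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

# Two discs that almost touch -- all parameters are made up for illustration.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
f = (((xx - 24) ** 2 + (yy - 32) ** 2 <= 7 ** 2)
     | ((xx - 40) ** 2 + (yy - 32) ** 2 <= 7 ** 2)).astype(float)
u = tv_denoise(f, lam=2.0)
# u shows reduced contrast and intermediate gray values, in particular
# between the discs -- i.e. no sharp edges everywhere.
```

Plotting `u` (e.g. with matplotlib) should show the contrast loss and the filled-in, fuzzy region between the discs described above.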

Moreover, I would love to see similar results for more complicated extensions of TV like the total generalized variation, which I treated here.

August 26, 2012 at 8:40 pm

Regarding the total variation talk by Christiane Pöschl: you might want to add that the exact minimizers presented are minimizers for the isotropic ROF model. I would assume that the considered effects do not appear in case of anisotropic ROF. Anyways, very interesting topic. I look forward to the paper.

August 26, 2012 at 9:13 pm

Of course, the isotropic total variation is meant here. But why do you think that the effect should not appear for anisotropic variants? By the way: are the calibrable sets for anisotropic TV norms known?

August 27, 2012 at 5:26 pm

I thought so because isotropic and anisotropic TV behave very differently in many ways, not just in terms of exact solutions of the ROF model. But I might be wrong with my assumption. I made some computational tests with Christiane’s images and the effects seem to appear as well, however in a more anisotropic fashion, of course. But I have to check again, especially with different boundary conditions. Anyway, the more I think about it, it might not be unusual that ROF behaves like this. And it is nice to have some analytical examples of that.

September 24, 2017 at 6:20 pm

Where could one find the talk presentation?

Thank you.

September 24, 2017 at 7:45 pm

I don’t think that the presentation is online anywhere. You may ask Christiane directly…

August 31, 2012 at 5:30 pm

It seems to me that TGV behaves differently. While my TV implementation does this: http://d.pr/i/jjx0, my TGV implementation (second order; I used the primal-dual algorithm from [1]) does that: http://d.pr/i/trXt.

I took the images from your example and experimented a bit until I figured out that lambda=3 gives me comparable results. Setting the same value for the TGV problem but choosing a small value (like 1/10) for the second-derivative term got me those results.

It would be interesting to see if this really holds theoretically.

[1] Antonin Chambolle and Thomas Pock. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2010.

August 31, 2012 at 9:46 pm

Interesting. The “darkening” of the background shouldn’t be there. (I think it is due to the finite domain on which you work; Christiane’s results are obtained on the whole plane and hence the “white background” does not change color.)