Currently I am at the SIAM Imaging conference in Hong Kong. It’s a great conference with great people at a great place. I am pretty sure that this will be the only post from here, since the conference is quite intense. I just wanted to report on two ideas that have become clear to me here; both are pretty simple and probably already widely known, but anyway:

**1. Non-convex + convex objective**

There are a lot of talks that deal with optimization problems of the form
$$\min_x F(x) + G(x).$$
Especially, people try to leverage as much structure of the functionals $F$ and $G$ as possible. Frequently, there arises a need to deal with non-convex parts of the objective, and indeed, there are several approaches around that deal in one way or another with non-convexity of $F$ or even $G$. Usually, in the presence of an $F$ that is not convex, it is helpful if $G$ has favorable properties, e.g. that $F + G$ is still bounded from below, coercive or even convex again. A particularly helpful property is strong convexity of $G$ (i.e. $G$ stays convex even if you subtract $\frac{c}{2}\|x\|^2$ from it for some $c > 0$). Here comes the simple idea: If you already allow $F$ to be non-convex, but only have a $G$ that is merely convex, but not strongly so, you can modify your objective to
$$\min_x \underbrace{F(x) - \tfrac{c}{2}\|x\|^2}_{=:\tilde F(x)} + \underbrace{G(x) + \tfrac{c}{2}\|x\|^2}_{=:\tilde G(x)}$$
for some $c > 0$. This will give you strong convexity of $\tilde G$ and an $\tilde F$ that is (often) theoretically no worse than $F$ used to be. It appeared to me that this is an idea that Kristian Bredies told me already almost ten years ago and which we made into a paper (together with Peter Maaß) in 2005, which somehow got delayed and was published no earlier than 2009.
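Just to make the bookkeeping concrete, here is a minimal numerical sketch of the trick (the toy choices of $F$ and $G = \|\cdot\|_1$ and all function names are mine, purely for illustration). It also uses the standard fact that the proximal operator of the shifted $\tilde G$ is available in closed form whenever the proximal operator of $G$ is:

```python
import numpy as np

c = 0.5  # modulus of strong convexity moved from F to G

def F(x):
    # toy non-convex data term (illustration only)
    return np.sum(np.cos(x)) + 0.5 * np.sum(x**2)

def G(x):
    # merely convex regularizer: the l1-norm
    return np.sum(np.abs(x))

def F_tilde(x):
    return F(x) - 0.5 * c * np.sum(x**2)

def G_tilde(x):
    # strongly convex with modulus c
    return G(x) + 0.5 * c * np.sum(x**2)

def prox_G(v, tau):
    # soft thresholding: prox of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_G_tilde(v, tau):
    # prox of tau*(G + (c/2)||.||^2) reduces to a scaled prox of G
    return prox_G(v / (1.0 + tau * c), tau / (1.0 + tau * c))

x = np.random.randn(5)
# reshuffling the quadratic does not change the objective value
assert np.isclose(F(x) + G(x), F_tilde(x) + G_tilde(x))
```

So any method that needs strong convexity of the second summand can be run on $\tilde F + \tilde G$ without changing the problem at all.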

**2. Convex-concave saddle point problems**

If your problem has the form
$$\min_x F(Kx) + G(x)$$
with some linear operator $K$ and both $F$ and $G$ convex, it has turned out that it is tremendously helpful for the solution to consider the corresponding saddle point formulation, i.e., using the convex conjugate $F^*$ of $F$, you write
$$\min_x \max_y \langle Kx, y\rangle + G(x) - F^*(y).$$
A class of algorithms that looks like the Arrow–Hurwicz method at first glance has been sparked by the method proposed by Chambolle and Pock. This method allows $F$ and $G$ to be merely convex (no smoothness or strong convexity needed) and only needs the proximal operators of both $F^*$ and $G$. I also worked on algorithms for slightly more general problems, involving a reformulation of the saddle point problem as a monotone inclusion, with Tom Pock in the paper An accelerated forward-backward algorithm for monotone inclusions, and I should also mention this nice approach by Bredies and Sun who consider another reformulation of the monotone inclusion.

However, in the spirit of the first point, one should take advantage of all the available structure in the problem, e.g. smoothness of one of the terms. Some algorithms can exploit smoothness of either $F^*$ or $G$ and only need convexity of the other term. An idea that has been used for some time already to tackle the case where $G$, say, is a sum of a smooth part and a non-smooth part (and $F^*$ is not smooth) is to dualize the non-smooth part of $G$: say we have $G = G_1 + G_2$ with smooth $G_1$, then you could write
$$\min_x \max_{y,z} \langle Kx, y\rangle + \langle x, z\rangle + G_1(x) - F^*(y) - G_2^*(z)$$
and you are back in business if your method allows for sums of convex functions in the dual. The trick got the sticky name “dual transportation trick” in a talk by Marc Teboulle here, and probably that will help me not to forget it from now on…
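For reference, here is a minimal sketch of the basic primal-dual iteration of Chambolle and Pock, applied to the model problem $\min_x \frac{1}{2}\|Kx - b\|^2 + \lambda\|x\|_1$, i.e. $F(y) = \frac{1}{2}\|y - b\|^2$ and $G = \lambda\|\cdot\|_1$; the concrete test problem, step size choices and all names are my own and just for illustration:

```python
import numpy as np

def chambolle_pock(K, b, lam, n_iter=500):
    """Primal-dual iteration for min_x 0.5*||Kx - b||^2 + lam*||x||_1."""
    m, n = K.shape
    L = np.linalg.norm(K, 2)      # operator norm of K
    tau = sigma = 0.9 / L         # step sizes with tau*sigma*||K||^2 < 1
    theta = 1.0                   # extrapolation parameter
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: prox of sigma*F* with F(y) = 0.5*||y - b||^2,
        # which is v -> (v - sigma*b) / (1 + sigma)
        y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*G = soft thresholding
        x_new = x - tau * (K.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        # extrapolation
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# tiny usage example with random data
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
x_hat = chambolle_pock(K, b, lam=0.1)
```

Both proximal mappings here are explicit, which is exactly the setting the method is made for; if $G$ split as above, one could instead take a gradient step for the smooth $G_1$ and a prox step in the additional dual variable $z$.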

May 13, 2014 at 8:54 pm

What about solving large scale LS problems, which occur many times in the optimization of images?

May 14, 2014 at 1:55 am

Given that “LS” means “least squares”, and that least squares problems tend to be convex (e.g. if they are linear), in principle all the first order methods should be applicable (although you need to code them by hand). But as is, the comment is much too vague to say anything more helpful.