Currently I am at the SIAM Imaging conference in Hong Kong. It’s a great conference with great people at a great place. I am pretty sure that this will be the only post from here, since the conference is quite intense. I just wanted to report on two ideas that have become clear to me here; both are pretty simple and probably already widely known, but anyway:

1. Non-convex + convex objective

There are a lot of talks that deal with optimization problems of the form

$\displaystyle \min_u F(u) + G(u).$

In particular, people try to leverage as much structure of the functionals ${F}$ and ${G}$ as possible. Frequently one needs to deal with non-convex parts of the objective, and indeed, there are several approaches around that handle non-convexity of ${F}$ or even of ${F+G}$ in one way or another. Usually, when ${F}$ is not convex, it helps if ${G}$ has favorable properties, e.g. such that ${F+G}$ is still bounded from below, coercive or even convex. A particularly helpful property is strong convexity of ${G}$ (i.e. ${G}$ stays convex even if you subtract ${\epsilon/2\|\cdot\|^2}$ from it). Here comes the simple idea: if you already allow ${F}$ to be non-convex, but your ${G}$ is merely convex and not strongly so, you can rewrite your objective as

$\displaystyle \underbrace{F(u) - \tfrac\epsilon2\|u\|^2}_{\leftarrow F(u)} + \underbrace{G(u) + \tfrac\epsilon2\|u\|^2}_{\leftarrow G(u)}$

for some ${\epsilon>0}$. This gives you a strongly convex ${G}$ and an ${F}$ that is (often) theoretically no worse than before. It occurred to me that Kristian Bredies told me this idea almost ten years ago, and we made it into a paper (together with Peter Maaß) in 2005, which somehow got delayed and was published no earlier than 2009.
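The shift above is a pure reshuffling of a quadratic between the two terms, so the objective value never changes. A minimal numerical sketch, with illustrative choices of ${F}$ (non-convex) and ${G}$ (merely convex) that are not from the post:

```python
import numpy as np

eps = 0.5  # illustrative epsilon > 0

def F(u):
    # a non-convex part: quartic minus a concave quadratic
    return np.sum(u**4) - np.sum(u**2)

def G(u):
    # a merely convex part: the l1 norm (not strongly convex)
    return np.sum(np.abs(u))

def F_shift(u):
    # new F: subtract the quadratic; often theoretically no worse
    return F(u) - 0.5 * eps * np.dot(u, u)

def G_shift(u):
    # new G: add the quadratic; now eps-strongly convex
    return G(u) + 0.5 * eps * np.dot(u, u)

u = np.array([1.0, -2.0, 0.5])
# the objective value is unchanged by the shift
assert np.isclose(F(u) + G(u), F_shift(u) + G_shift(u))
```

Strong convexity of the new ${G}$ is immediate, since subtracting ${\tfrac\epsilon2\|\cdot\|^2}$ from it gives back the convex ${G}$.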

2. Saddle point formulations

If your problem has the form

$\displaystyle \min_u F(u) + G(Ku)$

with some linear operator ${K}$ and both ${F}$ and ${G}$ convex, it has turned out that it is tremendously helpful for the solution to consider the corresponding saddle point formulation: i.e. using the convex conjugate ${G^*}$ of ${G}$, you write

$\displaystyle \min_u \max_v F(u) + \langle Ku, v\rangle -G^*(v).$

A class of algorithms that look like the Arrow-Hurwicz method at first glance has been sparked by the method proposed by Chambolle and Pock. This method allows ${F}$ and ${G}$ to be merely convex (no smoothness or strong convexity needed) and only needs the proximal operators of both ${F}$ and ${G^*}$. I have also worked on algorithms for slightly more general problems, involving a reformulation of the saddle point problem as a monotone inclusion, with Tom Pock in the paper An accelerated forward-backward algorithm for monotone inclusions, and I should also mention this nice approach by Bredies and Sun, who consider another reformulation of the monotone inclusion. However, in the spirit of the first point, one should take advantage of all the structure available in the problem, e.g. smoothness of one of the terms. Some algorithms can exploit smoothness of either ${F}$ or ${G^*}$ and only need convexity of the other term. An idea that has been used for some time already to tackle the case where ${F}$, say, is the sum of a smooth and a non-smooth part (and ${G^*}$ is not smooth) is to dualize the non-smooth part of ${F}$: say we have ${F = F_1 + F_2}$ with smooth ${F_1}$; then you can write
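To make the primal-dual iteration concrete, here is a minimal sketch of the Chambolle-Pock method applied to an illustrative instance of the problem above: 1-D total-variation denoising, ${\min_u \tfrac12\|u-f\|^2 + \lambda\|Ku\|_1}$ with ${K}$ the forward-difference operator. All concrete choices (signal ${f}$, ${\lambda}$, step sizes) are my own for illustration, not from the post:

```python
import numpy as np

def K(u):
    # forward differences
    return u[1:] - u[:-1]

def KT(v):
    # adjoint of the forward-difference operator
    w = np.zeros(len(v) + 1)
    w[:-1] -= v
    w[1:] += v
    return w

def chambolle_pock(f, lam, n_iter=500):
    L = 2.0                        # upper bound on the norm of K
    tau = sigma = 1.0 / L          # step sizes with tau*sigma*L^2 <= 1
    u = f.copy()
    u_bar = u.copy()
    v = np.zeros(len(f) - 1)
    for _ in range(n_iter):
        # dual step: prox of sigma*G* is projection onto [-lam, lam]
        v = np.clip(v + sigma * K(u_bar), -lam, lam)
        u_old = u
        # primal step: prox of tau*F for F(u) = 0.5*||u - f||^2
        u = (u - tau * KT(v) + tau * f) / (1.0 + tau)
        # extrapolation with theta = 1
        u_bar = 2 * u - u_old
    return u

# denoise a noisy piecewise-constant signal
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = chambolle_pock(f, lam=0.5)
```

Note that only the two proximal operators and the operator ${K}$ (with its adjoint) enter the iteration; no smoothness of ${G}$ is used anywhere, which is exactly the appeal of the method.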

$\displaystyle \begin{array}{rcl} &\min_u\max_v F_1(u) + F_2(u) + \langle Ku, v\rangle -G^*(v)\\ & \qquad= \min_u \max_{v,w} F_1(u) + \langle u,w\rangle + \langle Ku, v\rangle -G^*(v) - F_2^*(w) \end{array}$

and you are back in business if your method allows for sums of convex functions in the dual. The trick got the sticky name “dual transportation trick” in a talk by Marc Teboulle here, and hopefully that will help me not to forget it from now on…
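The dualization step rests on the identity ${F_2(u) = \sup_w \langle u,w\rangle - F_2^*(w)}$. A quick numerical sanity check of this identity, assuming the illustrative choice ${F_2 = |\cdot|}$ in one dimension, whose conjugate ${F_2^*}$ is the indicator function of ${[-1,1]}$ (so the supremum runs over ${w\in[-1,1]}$):

```python
import numpy as np

# grid of feasible dual variables, where F2*(w) = 0
w_grid = np.linspace(-1.0, 1.0, 2001)

def F2(u):
    # the non-smooth part: absolute value
    return abs(u)

def F2_via_conjugate(u):
    # sup over w in [-1, 1] of u*w - F2*(w), with F2* = 0 there
    return np.max(u * w_grid)

# the two expressions agree for any u
for u in [-2.3, -0.7, 0.0, 1.5]:
    assert np.isclose(F2(u), F2_via_conjugate(u))
```

The grid contains the endpoints ${\pm 1}$ exactly, which is where the supremum is attained for ${u\neq 0}$, so the check is exact up to floating-point rounding.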