ISMP is over now and I’m already home. I do not have much to report from the last day. This is not due to a lower quality of the talks but due to the fact that I was a little bit exhausted, as usual at the end of a five-day conference. However, I collected a few things for the record:

- In the morning I visited the semi-plenary by Xiaojun Chen on non-convex and non-smooth minimization with smoothing methods. Not surprisingly, she treated the problem $\min_x f(x) + \lambda\|x\|_p^p$ with convex and smooth $f$ and $0 < p < 1$. She proposed and analyzed smoothing methods, that is, smoothing the problem a bit to obtain a Lipschitz-continuous objective function $f_\mu$ with smoothing parameter $\mu > 0$, minimizing this and then gradually decreasing $\mu$. This works, as she showed. If I remember correctly, she also treated “iteratively reweighted least squares” as I described in my previous post. Unfortunately, she did not include the generalized forward-backward methods based on $\varphi$-functions for non-convex functions. Kristian and I pursued this approach in our paper Minimization of non-smooth, non-convex functionals by iterative thresholding and some special features of our analysis include:

- A condition which excludes some (but not all) local minimizers from being global.
- An algorithm which avoids these non-global minimizers by carefully adjusting the step length of the method.
- A result that the number of local minimizers is still finite, even if the problem is posed in $\ell^2$ and not in $\mathbb{R}^N$.
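
As a toy illustration of the smoothing strategy from Xiaojun Chen's talk: the sketch below replaces $|x_i|$ by $\sqrt{x_i^2+\mu^2}$, minimizes the smooth surrogate by gradient descent, and then shrinks $\mu$. The concrete smoothing, step sizes and $\mu$-schedule are my own illustrative choices, not necessarily the ones from the talk.

```python
import numpy as np

def smoothing_method(A, b, lam=0.1, p=0.5, mu0=1.0, mu_min=1e-3, inner=200):
    """Sketch of a smoothing method for min 0.5*||Ax-b||^2 + lam*sum_i |x_i|^p:
    replace |x_i| by sqrt(x_i^2 + mu^2), minimize the smooth surrogate by
    gradient descent (warm-started), then decrease mu and repeat."""
    x = np.zeros(A.shape[1])
    L_f = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the data-term gradient
    mu = mu0
    while mu > mu_min:
        # the smoothed penalty's gradient is lam*p*x*(x^2+mu^2)^(p/2-1); its
        # Lipschitz constant grows like lam*p*mu^(p-2), so the step must shrink
        step = 1.0 / (L_f + lam * p * mu ** (p - 2))
        for _ in range(inner):
            grad = A.T @ (A @ x - b) + lam * p * x * (x**2 + mu**2) ** (p / 2 - 1)
            x -= step * grad
        mu *= 0.5
    return x
```

On a small separable test problem this drives the true non-smooth objective well below its value at zero; of course the limit problem stays non-convex, so one can only expect stationary points, not global minimizers.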

Most of our results hold true if the $\ell_p$-quasi-norm is replaced by functions of the form

$$\sum_k \varphi(|x_k|)$$

with special non-convex $\varphi$, namely fulfilling a list of assumptions like

- $\varphi'(x) \to \infty$ for $x \to 0$ (infinite slope at $0$) and $\varphi(x) \to \infty$ for $x \to \infty$ (mild coercivity),
- $-\varphi$ strictly convex on $(0,\infty)$ and $\varphi'(x) \to 0$ for $x \to \infty$,
- for each $\epsilon > 0$ there is a $c > 0$ such that for $x \geq \epsilon$ it holds that $\varphi(x) \leq c\,x$, and
- local integrability of some section of $\partial\varphi$.

As one easily sees, $\ell_p$-quasi-norms fulfill these assumptions, and some other interesting functions do as well (e.g. some with a very steep slope at $0$).
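
One hallmark of such non-convex $\varphi$ is that the associated thresholding (proximal) functions are discontinuous, which one can already see in the scalar model case $\varphi(t) = |t|^p$: the minimizer jumps from $0$ to a value bounded away from zero. A brute-force numerical check (grid search, purely illustrative):

```python
import numpy as np

def prox_lp(z, lam=0.5, p=0.5):
    """Evaluate the prox of t -> lam*|t|^p at z by grid search, i.e.
    argmin_t 0.5*(t - z)^2 + lam*|t|^p.  Brute force, illustration only."""
    t = np.linspace(-2.0, 2.0, 40001)
    return t[np.argmin(0.5 * (t - z) ** 2 + lam * np.abs(t) ** p)]
```

For `lam = p = 1/2` one finds `prox_lp(0.2)` essentially $0$ while `prox_lp(1.5)` is about $1.29$: as $z$ grows past a threshold the minimizer jumps, which is exactly why iterative thresholding for such penalties needs extra care with step lengths.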

- Jorge Nocedal gave a talk on second-order methods for non-smooth problems and his main example was a functional like $f(x) + \lambda\|x\|_1$ with a convex and smooth $f$, but, different from Xiaojun Chen, he only considered the $\ell_1$-norm. His talk is among the best plenary talks I have ever attended and it was a great pleasure to listen to him. He carefully explained things and put them in perspective. When he skipped slides, he made me feel that I either did not miss anything important or understood the content even though he did not show it. He argued that using second-order information is not necessarily more expensive than first-order methods. Indeed, the $\ell_1$-norm can be used to reduce the number of degrees of freedom for a second-order step. What was pretty interesting is that he advocated
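
The point that the $\ell_1$-term shrinks the space in which second-order information is needed can be illustrated in a toy fashion (my own sketch, not Nocedal's actual algorithm): a cheap first-order step identifies a small working support, and the Newton-type system is then solved only on those coordinates.

```python
import numpy as np

def reduced_newton_step(A, b, lam, x, step):
    """One illustrative outer iteration for min 0.5*||Ax-b||^2 + lam*||x||_1:
    a proximal-gradient step picks the working support, then a Newton step on
    that (small) support solves the reduced optimality system with the sign
    pattern frozen."""
    z = x - step * A.T @ (A @ x - b)
    x1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    S = x1 != 0
    if not S.any():
        return x1, 0
    As = A[:, S]                      # only |S| columns enter the Hessian
    xS = np.linalg.solve(As.T @ As, As.T @ b - lam * np.sign(x1[S]))
    x2 = np.zeros_like(x)
    x2[S] = xS
    return x2, int(S.sum())
```

The linear system is $|S| \times |S|$ rather than $n \times n$; for sparse solutions that is where the savings over a dense second-order method come from.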

*semismooth Newton methods* for this problem. Roland and I pursued this approach some time ago in our paper A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints and, if I remember correctly (my notes are not complete at this point), his family of methods includes our ssn-method. The method Roland and I proposed worked amazingly well in the cases in which it converged, but it suffered from non-global convergence. We had some preliminary ideas for globalization which we could not tune enough to retain the speed of the method, and so we abandoned the topic. Now that the topic will most probably be revived by the community, I am looking forward to fresh ideas here.
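
For the record, here is a minimal sketch of a semismooth Newton iteration of this flavour for $\tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$, based on the non-smooth fixed-point equation $x = S_\lambda\bigl(x - A^T(Ax-b)\bigr)$ with the soft-thresholding $S_\lambda$; the details (active-set handling, stopping) are simplified and this is not the exact method from our paper.

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding S_lam(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ssn_l1(A, b, lam, maxit=50, tol=1e-12):
    """Semismooth Newton sketch for min 0.5*||Ax-b||^2 + lam*||x||_1.
    Each step applies a Newton derivative of the residual
    F(x) = x - soft(x - A^T(Ax - b), lam), which amounts to an active-set
    update: zero out the inactive variables and solve the reduced normal
    equations on the active ones."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(maxit):
        z = x - A.T @ (A @ x - b)
        active = np.abs(z) > lam
        x_new = np.zeros(n)
        if active.any():
            Aa = A[:, active]
            x_new[active] = np.linalg.solve(
                Aa.T @ Aa, Aa.T @ b - lam * np.sign(z[active]))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # no globalization: convergence is only local, as noted above
```

The lack of globalization is visible here as well: the active set can cycle, in which case the loop simply runs out of iterations instead of converging.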