### August 2012

Monthly Archive

August 30, 2012

It’s out! Our department has a vacant position for optimization to fill! In particular we are looking for somebody working in continuous (nonlinear) optimization. Well, I know there have been a number of open positions with a similar scope recently, but I also know there are plenty of excellent people working in this field.

In addition to the official advertisement (which can be found here (from the website of TU Braunschweig) or here (from academics.de)), here is some further advertisement: The math department here is a medium-sized department. It covers quite a broad range of mathematics:

- Numerical Linear Algebra (Fassbender, Bollhöfer)
- PDEs (Sonar, Hempel)
- Modelling (Langemann)
- Stochastics (Kreiss, Lindner, Aurzada)
- Applied Analysis/Mathematical Physics (Bach, myself)
- Algebra and Discrete Mathematics (Eick, Löwen, Opolka)

and, of course, Optimization (Zimmermann). In fact, I usually find an expert around for all the questions I have which are a bit outside my field. All groups are active and (as far as I can see) work together smoothly. The department is located in the Carl-Friedrich Gauss Faculty, which is also the home of the departments for Computer Science, Business Administration and Social Sciences. At least in Computer Science and Business Administration there are some mathematically oriented groups,

and there are several groups with some mathematical background and interesting fields of application (computer graphics, robotics, …). Moreover, the TU has a lot of engineering institutes with a strong background in mathematics and cool applications.

In addition to a lively and interesting research environment, the university treats its staff well (as far as I can see), and administrative burdens or failures do not hurt too much (in fact less than at other places, I’ve heard)!

In case you have any questions concerning the advertisement, feel free to ask me (or the head of the search committee, Jens-Peter Kreiss).

Deadline for application is **October 14th 2012**.


August 25, 2012

ISMP is over now and I’m already home. I do not have many things to report on from the last day. This is not due to a lower quality of the talks but to the fact that I was a little bit exhausted, as usual at the end of a five-day conference. However, I collected a few things for the record:

- In the morning I visited the semi-plenary by Xiaojun Chen on non-convex and non-smooth minimization with smoothing methods. Not surprisingly, she treated problems of the form $\min_x f(x) + \|x\|_p^p$ with $f$ convex and smooth and $0<p<1$. She proposed and analyzed smoothing methods, that is: smooth the problem a bit to obtain a Lipschitz-continuous objective function, minimize this, and then gradually decrease the smoothing. This works, as she showed. If I remember correctly, she also treated “iteratively reweighted least squares” as I described in my previous post. Unfortunately, she did not include the generalized forward-backward methods based on non-convex penalty functions. Kristian and I pursued this approach in our paper Minimization of non-smooth, non-convex functionals by iterative thresholding and some special features of our analysis include:

- A condition which excludes some (but not all) local minimizers from being global.
- An algorithm which avoids these non-global minimizers by carefully adjusting the step length of the method.
- A result that the number of local minimizers is still finite, even if the problem is posed in $\ell^2$ and not in $\mathbb{R}^n$.

Most of our results hold true if the $\ell^p$-quasi-norm is replaced by penalties of the form $\sum_k \varphi(|x_k|)$

with a special non-convex $\varphi$, namely one fulfilling a list of assumptions such as

- infinite slope at $0$ and mild coercivity,
- strict convexity on part of the domain,
- a uniform growth condition, and
- local integrability of some section of $\varphi$.

As one easily sees, $\ell^p$-quasi-norms fulfill these assumptions, and some other interesting functions do as well (e.g. some with a very steep slope at $0$).
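A quick Python sketch of the smoothing strategy described above (a minimal version under my own assumptions: a quadratic data term, the surrogate $(x_i^2+\varepsilon^2)^{p/2}$ for $|x_i|^p$, plain gradient descent, and a halving schedule for $\varepsilon$; the actual methods from the talk are more refined):

```python
import numpy as np

def lp_obj(x, A, b, lam, p):
    """True non-smooth, non-convex objective: 0.5*||Ax-b||^2 + lam * sum |x_i|^p."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x) ** p)

def smoothed_descent(A, b, lam=0.1, p=0.5, eps0=1.0, eps_min=0.05, inner=200):
    """Minimize the smoothed surrogate 0.5*||Ax-b||^2 + lam*sum (x_i^2+eps^2)^(p/2)
    by gradient descent, gradually decreasing the smoothing parameter eps."""
    x = np.zeros(A.shape[1])
    L_f = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    eps = eps0
    while eps >= eps_min:
        L_pen = lam * p * eps ** (p - 2)         # gradient-Lipschitz bound of the penalty
        step = 1.0 / (L_f + L_pen)               # guarantees monotone descent per stage
        for _ in range(inner):
            grad = A.T @ (A @ x - b) + lam * p * x * (x ** 2 + eps ** 2) ** (p / 2 - 1)
            x = x - step * grad
        eps *= 0.5                               # gradually decrease the smoothing
    return x
```

Since $(x^2+\varepsilon^2)^{p/2}\geq|x|^p$ and each stage decreases the smoothed objective, the final true objective is at most the initial smoothed one.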

- Jorge Nocedal gave a talk on second-order methods for non-smooth problems and his main example was a functional of the form $\min_x f(x) + \lambda\|x\|_1$
with a convex and smooth $f$, but, different from Xiaojun Chen, he only considered the $\ell^1$-norm. His talk is among the best plenary talks I have ever attended and it was a great pleasure to listen to him. He carefully explained things and put them in perspective. In the cases in which he skipped slides, he made me feel that I either did not miss an important thing, or understood them even though he didn’t show them. He argued that it is not necessarily more expensive to use second-order information than first-order methods. Indeed, the $\ell^1$-norm can be used to reduce the number of degrees of freedom for a second-order step. What was pretty interesting is that he advocated *semismooth Newton methods* for this problem. Roland and I pursued this approach some time ago in our paper A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints and, if I remember correctly (my notes are not complete at this point), his family of methods included our ssn-method. The method Roland and I proposed worked amazingly well in the cases in which it converged, but the method suffered from non-global convergence. We had some preliminary ideas for globalization, which we could not tune enough to retain the speed of the method, and abandoned the topic. Now that the topic will most probably be revived by the community, I am looking forward to fresh ideas here.
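A sketch of the semismooth Newton idea for $\min_x \tfrac12\|Ax-b\|^2+\lambda\|x\|_1$ (my own minimal version, not Nocedal’s method and much simpler than the ssn paper): the solution is characterized by the fixed-point equation $x = S_\lambda(x - A^T(Ax-b))$ with the soft-thresholding operator $S_\lambda$, and one applies Newton’s method with an element of the generalized Jacobian.

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding S_lam, the proximal map of lam*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ssn_lasso(A, b, lam, tol=1e-10, max_iter=50):
    """Semismooth Newton on F(x) = x - S_lam(x - A^T(Ax - b)) = 0.
    A generalized Jacobian is I - D + D*A^T A with D = diag(|z_i| > lam),
    z = x - A^T(Ax - b). Convergence is only local in general."""
    n = A.shape[1]
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(max_iter):
        z = x - (AtA @ x - Atb)
        F = x - soft(z, lam)
        if np.linalg.norm(F) < tol:
            break
        D = (np.abs(z) > lam).astype(float)
        J = np.eye(n) - np.diag(D) + D[:, None] * AtA   # generalized Jacobian
        x = x - np.linalg.solve(J, F)
    return x
```

On an instance with orthonormal $A$ the method converges in one step; for general $A$ the convergence is only local, which is exactly the globalization issue mentioned above.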


August 23, 2012

Today there are several things I could blog on. The first is the plenary by Rich Baraniuk on Compressed Sensing. However, I don’t think that I could reflect the content in a way which would be helpful for a potential reader. Just for the record: If you have the chance to visit one of Rich’s talks: Do it!

The second thing is the talk by Bernd Hofmann on source conditions, smoothness and variational inequalities and their use in regularization of inverse problems. However, this would be too technical for now and I just did not take enough notes to write a meaningful post.

As a third thing I have the talk by Christian Clason on inverse problems with uniformly distributed noise. He argued that for uniform noise it is much better to use an $L^\infty$ discrepancy term instead of the usual $L^2$ one. He presented a path-following semismooth Newton method to solve the resulting problem and showed examples with different kinds of noise. Indeed the examples showed that $L^\infty$ works much better than $L^2$ here. But in fact it works even better if the noise is not uniformly distributed but “impulsive”, i.e. it attains the bounds almost everywhere. It seems to me that uniform noise would need a slightly different penalty, but I don’t know which one – probably you do? Moreover, Christian presented the balancing principle to choose the regularization parameter (without knowledge about the noise level) and this was the first time I really got what it’s about. What one does here is to choose the regularization parameter $\alpha$ such that a multiple of the discrepancy term (with a factor which only depends on the problem, but not on the noise) equals the regularization term.

The rationale behind this is that the left-hand side is monotonically non-decreasing in $\alpha$, while the right-hand side is monotonically non-increasing. Hence, there should be some $\alpha$ “in the middle” which makes both somewhat equally large. Of course, we want neither to “over-regularize” (which would usually “smooth too much”) nor to “under-regularize” (which would not eliminate the noise). Hence, balancing seems to be a valid choice. From a practical point of view the balancing is also nice because one can use a fixed-point iteration which converges in a small number of iterations.
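As an illustration (my own sketch, not Christian’s method): for plain Tikhonov regularization $\min_u \tfrac12\|Ku-f\|^2 + \tfrac{\alpha}{2}\|u\|^2$ one can locate a balancing parameter by bisection on the balance equation $\sigma\|Ku_\alpha-f\|^2 = \alpha\|u_\alpha\|^2$; the specific form of this equation and the constant $\sigma$ are assumptions for this toy example, and the data is taken noiseless and in the range of $K$ so that a sign change is bracketed.

```python
import numpy as np

def tikhonov(K, f, alpha):
    """Closed-form solution of min_u 0.5*||Ku-f||^2 + 0.5*alpha*||u||^2."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f)

def balance(K, f, sigma=1.0, lo=1e-8, hi=1e8, iters=80):
    """Bisection on g(alpha) = sigma*||K u_alpha - f||^2 - alpha*||u_alpha||^2.
    For injective K and f in the range of K: g < 0 for tiny alpha (residual is
    O(alpha^2)) and g > 0 for huge alpha (u_alpha -> 0)."""
    def g(alpha):
        u = tikhonov(K, f, alpha)
        return sigma * np.sum((K @ u - f) ** 2) - alpha * np.sum(u ** 2)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)      # geometric midpoint: alpha spans many scales
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

(The fixed-point iteration mentioned in the talk plays the same role but is cheaper; the bisection is just the most transparent way to see the balancing condition at work.)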

Then there was the talk by Esther Klann, but unfortunately, I was late and only heard the second half…

Last but not least we have the talk by Christiane Pöschl. If you are interested in Total-Variation-Denoising (TV denoising), then you have probably heard many times that “TV denoising preserves edges” (have a look at the Wikipedia page – it claims this twice). What Christiane showed (in a work with Vicent Caselles and M. Novaga) is that this claim is not true in general but only in very special cases. In the case of characteristic functions, the only sets for which the TV minimizer has sharp edges are the so-called calibrated sets, introduced by Caselles et al. Building on earlier works by Caselles and co-workers she calculated *exact minimizers* for TV denoising in the case that the image consists of characteristic functions of two convex sets or of a single star-shaped domain, that is, for a given set $C$ she calculated the solution of $\min_u \tfrac12\int (u-\chi_C)^2\,dx + \lambda\int|Du|.$

This is not as easy as it may sound. Even for the minimizer for a single convex set one has to make some effort. She presented a nice connection between the shape of the obtained level sets and the morphological operators of closing and opening. With the help of this link she derived a methodology to obtain the exact TV denoising minimizer for all parameters. I do not have the images right now, but be assured that most of the time the minimizers do *not* have sharp edges all over the place. Even for simple geometries (like two rectangles touching in a corner) strange things happen and only very few sharp edges appear. I’ll keep you posted in case the paper comes out (or appears as a preprint).

Christiane has some nice images which make this much more clear:

For two circles, edges are preserved if they are far enough away from each other. If they are close, the area “in between” them is filled and, moreover, exhibits a fuzzy boundary. I remember seeing effects like this in the output of TV solvers and thinking “well, it seems that the algorithm is either not good or not converged yet – TV should output sharp edges!”.

For a star-shaped domain (well, actually a star) the output looks like this: The corners are not only rounded but also blurred, and this is true both for the “outer” corners and the “inner” corners.

So, if you have any TV-minimizing code, go ahead and check if your code actually does the right things on images like this!

Moreover, I would love to see similar results for more complicated extensions of TV like Total Generalized Variation, which I treated here.


August 22, 2012

Today I report on two things I came across here at ISMP:

- The first is a talk by Russell Luke on **Constraint qualifications for nonconvex feasibility problems**. Luke treated the NP-hard problem of finding sparsest solutions of linear systems. In fact he did not tackle this problem directly but the problem of finding an $s$-sparse solution of a system of equations $Ax=b$. He formulated this as a feasibility problem (well, Heinz Bauschke was a collaborator) as follows: With the usual malpractice let us denote by $\|x\|_0$ the number of non-zero entries of $x$. Then the problem of finding an $s$-sparse solution to $Ax=b$ is: find $x$ with $\|x\|_0\leq s$ and $Ax=b$.
In other words: find a feasible point, i.e. a point which lies in the intersection of the two sets. Well, most often feasibility problems involve convex sets, but here the first one, given by this “$\ell^0$-norm”, is definitely not convex. One of the simplest algorithms for the convex feasibility problem is to alternatingly project onto both sets. This algorithm dates back to von Neumann and has been analyzed in great detail. To make this method work for non-convex sets one only needs to know how to project onto both sets. For the equality constraint one can use numerical linear algebra to obtain the projection. The non-convex constraint on the number of non-zero entries is in fact even easier: The projection onto $\{x : \|x\|_0\leq s\}$ consists of just keeping the $s$ largest entries of $x$ (in absolute value) while setting the others to zero (known as the “best $s$-term approximation”). However, the theory breaks down in the case of non-convex sets. Russell treated this problem in several papers (have a look at his publication page) and in the talk he focused on the question of constraint qualification, i.e. what kind of regularity has to be imposed on the intersection of the two sets. He could show that (local) linear convergence of the algorithm (which is observed numerically) can indeed be justified theoretically. One point which is still open is the phenomenon that the method seems to be convergent regardless of the initialization and (even more surprisingly) that the limit point seems to be independent of the starting point (and also seems to be robust with respect to overestimating the sparsity $s$). I wondered if his results are robust with respect to inexact projections. For larger problems the projection onto the equality constraint is computationally expensive. For example, it would be interesting to see what happens if one approximates the projection with a truncated CG iteration as Andreas, Marc and I did in our paper on subgradient methods for Basis Pursuit.
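The two projections are simple enough to sketch in a few lines (my own toy version; whether the iteration converges to a sparse feasible point is exactly the theoretical question of the talk, so the sketch demonstrates the projections, not a convergence guarantee):

```python
import numpy as np

def proj_affine(A, b, x):
    """Orthogonal projection of x onto the affine set {z : Az = b}:
    x - A^+(Ax - b), where lstsq yields the minimum-norm correction."""
    corr, *_ = np.linalg.lstsq(A, A @ x - b, rcond=None)
    return x - corr

def proj_sparse(x, s):
    """Best s-term approximation: keep the s largest entries in absolute value."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def alternating_projections(A, b, s, iters=200):
    """Alternate projections onto the sparsity set and the affine set."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = proj_sparse(proj_affine(A, b, x), s)
    return x, proj_affine(A, b, x)
```

By construction, the affine-projected iterate always satisfies $Ax=b$ exactly and the thresholded iterate is always $s$-sparse; the open question from the talk is when the two coincide in the limit.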

- Joel Tropp reported on his paper Sharp recovery bounds for convex deconvolution, with applications, joint with Michael McCoy. However, in his talk he used *demixing* instead of deconvolution (which, I think, is more appropriate and leads to less confusion). With “demixing” they mean the following: Suppose you have two signals $x_0$ and $y_0$ of which you observe only the superposition of $x_0$ and a unitarily transformed $y_0$, i.e. for a unitary matrix $U$ you observe $z_0 = x_0 + Uy_0$.
Of course, without further assumptions there is no way to recover $x_0$ and $y_0$ from the knowledge of $z_0$ and $U$. As one motivation he used the assumption that both $x_0$ and $y_0$ are sparse. After the big bang of compressed sensing nobody wonders that one turns to convex optimization with $\ell^1$-norms in the following manner: \begin{equation*} \min_{x,y} \|x\|_1 + \lambda\|y\|_1 \ \text{such that}\ x + Uy = z_0. \tag{1} \end{equation*}

This looks a lot like sparse approximation: Eliminating $x$ one obtains the unconstrained problem \begin{equation*} \min_y \|z_0-Uy\|_1 + \lambda \|y\|_1. \end{equation*}

Phrased differently, this problem aims at finding an approximate sparse solution $y$ of $Uy\approx z_0$ such that the residual (one could also say “noise”) $z_0-Uy$ is also sparse. This differs from the common Basis Pursuit Denoising (BPDN) by the structure of the penalty for the residual (which in BPDN is the squared $\ell^2$-norm). This is due to the fact that in BPDN one usually assumes Gaussian noise, which naturally leads to the squared $\ell^2$-norm. Well, one man’s noise is the other man’s signal, as we see here. Tropp and McCoy obtained very sharp thresholds on the sparsity of $x_0$ and $y_0$ which allow for *exact* recovery of both of them by solving (1). One thing which makes their analysis simpler is the following reformulation: They treated the related problem \begin{equation*} \min_{x,y} \|x\|_1 \ \text{such that}\ \|y\|_1\leq\alpha,\ x+Uy=z_0 \end{equation*} (I would call this the Ivanov version of the Tikhonov problem (1)). This allows for precise exploitation of prior knowledge by assuming that the number $\alpha$ is known.

First I wondered if this reformulation was responsible for their unusually sharp results (sharper than the results for exact recovery by BPDN), but I think it’s not. I think this is due to the fact that they have this strong assumption on the “residual”, namely that it is sparse. This can be formulated with the help of the $\ell^1$-norm (which is “non-smooth”) in contrast to the smooth squared $\ell^2$-norm, which is what one gets as a prior for Gaussian noise. Moreover, McCoy and Tropp generalized their result to the case in which the structure of $x_0$ and $y_0$ is formulated by two functionals, respectively. Assuming a kind of non-smoothness of these functionals, they obtain the same kind of results, and especially matrix decomposition problems are covered.


August 21, 2012

The second day of ISMP started (for me) with the session I organized and chaired.

The first talk was by Michael Goldman on **Continuous Primal-Dual Methods in Image Processing**. He considered the continuous Arrow-Hurwicz method for saddle point problems \begin{equation*} \min_x \max_y K(x,y) \end{equation*}

with $K$ convex in the first and concave in the second variable. The continuous Arrow-Hurwicz method consists of solving \begin{equation*} x'(t) = -\partial_x K(x(t),y(t)),\qquad y'(t) = \partial_y K(x(t),y(t)). \end{equation*}

His talk revolved around the case in which $K$ comes from a functional which contains the total variation, and for this case he presented a nice analysis of the problem, including convergence of the method to a solution of the primal problem and some a-posteriori estimates. This reminded me of Showalter’s method for the regularization of ill-posed problems. The Arrow-Hurwicz method looks like a regularized version of Showalter’s method and hence early stopping does not seem to be necessary for regularization. The related paper is Continuous Primal-Dual Methods for Image Processing.
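As a toy illustration (my own, not from the talk), one can integrate the Arrow-Hurwicz flow with explicit Euler steps for the bilinear-quadratic saddle function $K(x,y)=\tfrac12 x^2 + xy - \tfrac12 y^2$, whose unique saddle point is the origin:

```python
import numpy as np

def arrow_hurwicz_flow(x0, y0, dt=0.01, steps=2000):
    """Explicit Euler for x' = -dK/dx, y' = dK/dy with
    K(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (saddle point at the origin)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = -(x + y)        # -dK/dx
        dy = x - y           #  dK/dy
        x, y = x + dt * dx, y + dt * dy
    return x, y
```

Each Euler step here is a contractive scaled rotation, so the trajectory spirals into the saddle point; for the TV functionals of the talk, matters are of course much more delicate.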

The second talk was given by Elias Helou and was on **Incremental Algorithms for Convex Non-Smooth Optimization with Applications in Image Reconstruction**. He presented his work on a very general framework for problems of the class \begin{equation*} \min_{x\in C} f(x) \end{equation*}

with a convex function $f$ and a convex set $C$. Basically, he abstracted the properties of the projected subgradient method, which consists of iteratively taking subgradient descent steps for $f$ followed by projections onto $C$: With a subgradient $g^k\in\partial f(x^k)$ and step sizes $\alpha_k$ this reads as \begin{equation*} x^{k+1} = P_C(x^k - \alpha_k g^k). \end{equation*}

He extracted the conditions one needs from the subgradient descent step and from the projection step and formulated an algorithm which consists of the successive application of an “optimality operator” $\mathcal{O}$ (replacing the subgradient step) and a “feasibility operator” $\mathcal{F}$ (replacing the projection step). The algorithm then reads as \begin{equation*} x^{k+1} = \mathcal{F}(\mathcal{O}(x^k)) \end{equation*}

and he showed convergence under the extracted conditions. The related paper is Incremental Subgradients for Constrained Convex Optimization: a Unified Framework and New Methods.
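A minimal instance of the projected subgradient scheme above (my own toy example): minimize $f(x)=\|x-c\|_1$ over the box $C=[-1,1]^n$, where the projection is a simple clip and the optimal value $\sum_i \max(|c_i|-1,0)$ is known in closed form, which allows a Polyak step size.

```python
import numpy as np

def projected_subgradient(c, iters=2000):
    """min ||x - c||_1 over the box [-1,1]^n via projected subgradient steps.
    The optimum is the clipped c, with value sum(max(|c_i| - 1, 0))."""
    f_opt = np.sum(np.maximum(np.abs(c) - 1.0, 0.0))
    x = np.zeros_like(c)
    best = np.sum(np.abs(x - c))
    for _ in range(iters):
        g = np.sign(x - c)                    # a subgradient of ||. - c||_1
        if not g.any():
            break                             # x = c is feasible, hence optimal
        step = (np.sum(np.abs(x - c)) - f_opt) / np.sum(g * g)   # Polyak step
        x = np.clip(x - step * g, -1.0, 1.0)  # projection onto the box
        best = min(best, np.sum(np.abs(x - c)))
    return x, best
```

The Polyak step uses the known optimal value, which is of course cheating for real problems; the standard alternative is a diminishing step size $\alpha_k \sim 1/\sqrt{k}$.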

The third talk was by Jerome Fehrenbach on **Stripe removal in images, applications in microscopy**. He considered the problem of a very specific kind of noise which appears in the form of stripes (for example in single plane illumination microscopy). In fact he considered a slightly more general case and the model he proposed was as follows: The observed image is \begin{equation*} u_{\mathrm{obs}} = u + n, \end{equation*}

i.e. the usual sum of the true image $u$ and noise $n$. However, for the noise he assumed that it is given by \begin{equation*} n = \sum_j \psi_j * \lambda_j, \end{equation*}

i.e. it is a sum of different convolutions. The $\psi_j$ are kind of shape functions which describe the “pattern of the noise” and the $\lambda_j$ are samples of noise processes following specific distributions (white noise realizations, impulsive noise or something else). He then formulated a variational method to identify the noise components. Basically, this is the usual variational approach to image denoising, but now the optimization variable is the *noise* rather than the *image*. This is due to the fact that the noise has a specific, complicated structure and the usual formulation with the image as optimization variable is not feasible. He used the primal-dual algorithm by Chambolle and Pock for this problem and showed that the method works well on real-world problems.

Another theme which caught my attention here is “optimization with variational inequalities as constraints”. At first glance that sounds pretty awkward: Variational inequalities can be quite complicated things, and why on earth would somebody consider these things as side conditions in optimization problems? In fact there are good reasons to do so. One reason is if you have to deal with bi-level optimization problems. Consider an optimization problem \begin{equation*} \min_{x\in C} f(x,p) \tag{1} \end{equation*}

with $f(\cdot,p)$ convex, $C$ convex (omitting regularity conditions which could be necessary to impose) and $f$ depending on a parameter $p$. Now consider the case that you want to choose the parameter $p$ in an optimal way, i.e. such that it solves another optimization problem. This could look like \begin{equation*} \min_p F(x)\quad\text{such that}\quad x \text{ solves (1)}. \tag{2} \end{equation*}

Now you have an optimization problem as a constraint. Here we use the optimality condition for problem (1): For differentiable $f$, $x$ solves (1) if and only if \begin{equation*} x\in C\quad\text{and}\quad \langle \nabla_x f(x,p),\, y-x\rangle \geq 0\ \text{ for all } y\in C. \end{equation*}

In other words: We can reformulate (2) as \begin{equation*} \min_p F(x)\quad\text{such that}\quad x\in C\ \text{ and }\ \langle \nabla_x f(x,p),\, y-x\rangle \geq 0\ \text{ for all } y\in C. \end{equation*}

And there it is, our optimization problem with a variational inequality as constraint. Here at ISMP there are entire sessions devoted to this, see here and here.
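A one-dimensional toy version of such a bi-level problem (my own illustration): take the inner problem $\min_{x\geq 0}(x-p)^2$, whose variational-inequality condition $x\geq 0$, $2(x-p)(y-x)\geq 0$ for all $y\geq 0$ is solved by $x(p)=\max(p,0)$, and choose $p$ on a grid to minimize the outer objective $F(x)=(x-1)^2$.

```python
import numpy as np

def inner_solution(p):
    """Solves min_{x >= 0} (x - p)^2, equivalently the variational inequality
    x >= 0 and 2*(x - p)*(y - x) >= 0 for all y >= 0."""
    return max(p, 0.0)

def bilevel_grid_search(p_grid):
    """Brute-force outer problem: min_p (x(p) - 1)^2 with x(p) from the inner VI."""
    outer = [(inner_solution(p) - 1.0) ** 2 for p in p_grid]
    return p_grid[int(np.argmin(outer))]

p_best = bilevel_grid_search(np.linspace(-2.0, 2.0, 401))
```

Of course, grid search is hopeless beyond one dimension; the point of the reformulation above is precisely to make such problems accessible to serious optimization methods.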


August 20, 2012

The scientific program at ISMP started today and I had planned to write a small personal summary of each day. However, it is a very intense meeting: lots of excellent talks, lots of people to meet and little spare time. So I’m afraid that I have to deviate from my plan a little bit. Instead of a summary of every day I will just pick out a few events. I remark that these picks do not reflect quality, significance or anything like this in any way. I just pick things for which I have something to record for personal reasons.

My day started after the first plenary with the session Testing environments for machine learning and compressed sensing, in which my own talk was located. The session started with the talk by Michael Friedlander on the SPOT toolbox. Haven’t heard of SPOT yet? Take a look! In a nutshell, it’s a toolbox which turns MATLAB into “OPLAB”, i.e. it allows one to treat abstract *linear operators* like *matrices*. By the way, the code is on github.

The second talk was by Katya Scheinberg (who is giving a semi-plenary talk on derivative-free optimization at the moment…). She talked about speeding up FISTA by cleverly adjusting step sizes and over-relaxation parameters and generalizing these ideas to other methods like alternating direction methods. Notably, she used the “SPEAR test instances” from our project homepage! (And credited them as “surprisingly hard sparsity problems”.)

My own talk was the third and last one in that session. I talked about the issue of constructing test instances for Basis Pursuit Denoising. I argued that the naive approach (which takes a matrix $A$, a right-hand side $b$ and a parameter $\lambda$ and lets some great solver run for a while to obtain a solution $x$) may suffer from “trusted method bias”. I proposed to use “reverse instance construction”, which is: First choose $A$, $\lambda$ *and the solution $x$* and then *construct the right-hand side $b$* (I blogged on this before here).
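The idea can be sketched with the optimality condition of Basis Pursuit Denoising, $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$: the chosen $x$ is a solution exactly if $A^T(b-Ax)$ equals $\lambda\,\mathrm{sign}(x_i)$ on the support of $x$ and is at most $\lambda$ in absolute value elsewhere. The following simplified sketch (my own; the work behind the talk does this more carefully, in particular enforcing the off-support condition by construction) builds a residual that satisfies the on-support equations and then just checks the off-support bound:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n, s, lam = 20, 50, 4, 0.5

A = rng.standard_normal((m, n))
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)            # the chosen solution

# Residual r = b - Ax with A_S^T r = lam * sign(x_S) on the support S
As = A[:, support]
r = lam * As @ np.linalg.solve(As.T @ As, np.sign(x[support]))
b = A @ x + r                                  # the constructed right-hand side

# On the support the optimality condition holds by construction;
# off the support it must be checked (and repaired by a better choice of r if violated):
off_support_ok = np.max(np.abs(A.T @ r)) <= lam + 1e-12
```

If `off_support_ok` holds, $x$ is provably a solution of the instance $(A, b, \lambda)$ and no “trusted solver” was needed to certify it.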

Last but not least, I’d like to mention the talk by Thomas Pock: He talked about parameter selection in variational models (think of the regularization parameter in Tikhonov regularization, for example). In a paper with Karl Kunisch titled A bilevel optimization approach for parameter learning in variational models they formulated this as a bi-level optimization problem. An approach which seems to have been overdue! Although they treat somewhat simple inverse problems (well, denoising) (but with not-so-easy regularizers), it is a promising first step in this direction.


August 20, 2012

Today I arrived at ISMP in Berlin. This seems to be the largest conference on optimization and is hosted by the Mathematical Optimization Society. (As a side note: The society recently changed its name from MPS (Mathematical Programming Society) to MOS. Probably the conference will be called ISMO in a few years…).

The reception today was special in comparison to conference receptions I have attended so far. First, it was held in the Konzerthaus, which is a pretty fancy neo-classical building. Well, I’ve been to equally fancy buildings at conference receptions at GAMM or AIP conferences already, but the distinguishing feature this evening was the program. As usual it featured welcome notes by important people (notably, the one by the official government representative was accurate and entertaining!), prizes and music. The music was great, the host (G. M. Ziegler) did a great job and the ceremony felt like a show rather than an opening reception.

Of the prizes I’d like to mention two:

After this reception I am looking even more forward to the rest of this conference.

As a side note: Something seems to be wrong with me and optimization conferences. It seems like every time I visit such a conference, I lose my cell phone. It happened to me at SIOPT 2011 in Darmstadt and happened to me again today…
