I have an open position for a Scientific Assistant/PhD student. The salary is according to TV-L EG 13. (Don’t know what that means? Have a look here.) The position starts on 01.09.2013 (an earlier start is possible) and is initially limited to two years; an extension is possible, and for a PhD student at least three years are planned.

Candidates should

• have an above-average degree (Master’s or Diploma) in mathematics,
• have very good knowledge of numerical mathematics and functional analysis,
• have good knowledge of scientific computing or optimization,
• know German well and
• have a strong interest in applied mathematics and a high commitment to scientific research.

The responsibilities include

• participation in teaching and
• independent but supervised research in the field of applied mathematics (especially mathematical imaging and inverse problems).

Please send applications including a CV, copies of certificates and letters of recommendation (if any) in electronic form directly to me. The deadline is 30.06.2013.

If you would like to post the job advertisement on your bulletin board, here’s the pdf file.

The scientific program at ISMP started today and I had planned to write a small personal summary of each day. However, it is a very intense meeting: lots of excellent talks, lots of people to meet and little spare time. So I’m afraid I have to deviate from my plan a bit. Instead of a summary of every day I will just pick out a few events. I remark that these picks do not reflect quality or significance in any way; I just pick things for which I have something to record for personal reasons.

My day started after the first plenary with the session Testing environments for machine learning and compressed sensing, in which my own talk was located. The session started with Michael Friedlander’s talk on the SPOT toolbox. Haven’t heard of SPOT yet? Take a look! In a nutshell, it’s a toolbox which turns MATLAB into “OPLAB”, i.e. it allows one to treat abstract linear operators like matrices. By the way, the code is on github.
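SPOT itself is a MATLAB toolbox, but the idea of treating matrix-free operators like matrices exists in other environments too. As a rough illustration (not SPOT itself), here is a small Python sketch using SciPy’s `LinearOperator`, where only the action of an operator on a vector is specified and operators can still be composed like matrices:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

n = 8

# A "matrix-free" circular shift: only its action on a vector is given
S = LinearOperator((n, n),
                   matvec=lambda x: np.roll(x, 1),
                   rmatvec=lambda y: np.roll(y, -1))

# A matrix-free scaling operator
D = LinearOperator((n, n),
                   matvec=lambda x: 2.0 * x,
                   rmatvec=lambda y: 2.0 * y)

A = D * S                       # compose operators like matrices
x = np.arange(n, dtype=float)
y = A.matvec(x)                 # applies the composition; no matrix is ever formed
```

The composed operator `A` can be fed to SciPy’s iterative solvers just like an explicit matrix, which is exactly the kind of abstraction SPOT provides for MATLAB.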

The second talk was by Katya Scheinberg (who is giving a semi-plenary talk on derivative-free optimization at the moment…). She talked about speeding up FISTA by cleverly adjusting step sizes and over-relaxation parameters and generalizing these ideas to other methods like alternating direction methods. Notably, she used the “SPEAR test instances” from our project homepage! (And credited them as “surprisingly hard sparsity problems”.)

My own talk was the third and last one in that session. I talked about the issue of constructing test instances for Basis Pursuit Denoising. I argued that the naive approach (which takes a matrix ${A}$, a right-hand side ${b}$ and a parameter ${\lambda}$ and lets some good solver run for a while to obtain a solution ${x^*}$) may suffer from “trusted method bias”. I proposed to use “reverse instance construction” instead, which is: first choose ${A}$, ${\lambda}$ and the solution ${x^*}$, and then construct the right-hand side ${b}$ (I blogged on this before here).
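To give a flavor of the idea, here is a simplified sketch (not the construction from my talk, which also covers the underdetermined case): for a full-column-rank ${A}$ one can pick ${x^*}$ and a subgradient ${s\in\partial\|x^*\|_1}$, then choose ${b}$ so that the optimality condition ${A^T(Ax^*-b)=-\lambda s}$ of ${\min_x \tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1}$ holds exactly. The sizes and values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 20, 10, 0.5          # m >= n so that A has full column rank

A = rng.standard_normal((m, n))
x_star = np.zeros(n)
x_star[:3] = [1.0, -2.0, 0.5]    # the chosen sparse solution

# Subgradient of ||.||_1 at x_star: sign on the support, free in (-1, 1) off it
s = np.sign(x_star)
s[x_star == 0] = rng.uniform(-0.9, 0.9, int(np.sum(x_star == 0)))

# Choose w with A^T w = lam * s (solvable since A^T A is invertible)
w = A @ np.linalg.solve(A.T @ A, lam * s)
b = A @ x_star + w               # the constructed right-hand side

# x_star now satisfies the optimality condition exactly, by construction
assert np.allclose(A.T @ (A @ x_star - b), -lam * s)
```

The point is that ${x^*}$ is known to be an exact minimizer without ever running a solver, so no “trusted method” enters the construction.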

Last but not least, I’d like to mention the talk by Thomas Pock: he talked about parameter selection in variational models (think of the regularization parameter in Tikhonov regularization, for example). In a paper with Karl Kunisch titled A bilevel optimization approach for parameter learning in variational models they formulated this as a bilevel optimization problem. An approach which seems to have been overdue! Although they treat rather simple inverse problems (well, denoising), albeit with not-so-easy regularizers, it is a promising first step in this direction.

Recently Arnd Rösch and I organized the minisymposium “Parameter identification and nonlinear optimization” at the SIAM Conference on Optimization. One of the aims of this symposium was to initiate more connections between the communities of optimal control of PDEs on the one hand and regularization of ill-posed problems on the other hand. To give a bit of background, let me formulate the “mother problems” of both fields:

Example 1 (Mother problem in optimal control of PDEs) We consider a bounded Lipschitz domain ${\Omega}$ in ${{\mathbb R}^2}$ (or ${{\mathbb R}^3}$). Assume that we are given a target (or desired state) ${y_d}$, a real-valued function on ${\Omega}$. Our aim is to find a function (or control) ${u}$ (also defined on ${\Omega}$) such that the solution ${y}$ of the equation

$\displaystyle \begin{array}{rcl} \Delta y & = u,\ \text{ on }\ \Omega\\ y & = 0,\ \text{ on }\ \partial\Omega \end{array}$

is as close to ${y_d}$ as possible. Moreover, our solution (or control) shall obey some pointwise bounds

$\displaystyle a\leq u \leq b.$

This motivates the following constrained optimization problem:

$\displaystyle \begin{array}{rcl} \min_{y,u} \|y - y_d\|_{L^2}^2\quad \text{s.t.} & \Delta y = u,\ \text{ on }\ \Omega\\ & y = 0,\ \text{ on }\ \partial\Omega.\\ & a\leq u\leq b. \end{array}$

Often the regularized problem is considered as well: for a small ${\alpha>0}$ solve

$\displaystyle \begin{array}{rcl} \min_{y,u} \|y - y_d\|_{L^2}^2 + \alpha\|u\|_{L^2}^2\quad \text{s.t.} & \Delta y = u,\ \text{ on }\ \Omega\\ & y = 0,\ \text{ on }\ \partial\Omega.\\ & a\leq u\leq b. \end{array}$

(This problem is also treated extensively in Section 1.2.1 of the excellent book “Optimal Control of Partial Differential Equations” by Fredi Tröltzsch.)
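A discretized one-dimensional analogue of the regularized problem can be solved in a few lines of Python with projected gradient descent on the reduced functional ${J(u)=\|Su-y_d\|^2+\alpha\|u\|^2}$, where ${S}$ denotes the solution operator of the PDE. All concrete choices below (grid size, desired state, bounds, ${\alpha}$) are arbitrary for illustration:

```python
import numpy as np

# 1D model: y'' = u on (0,1), y(0) = y(1) = 0, finite differences
N = 99                       # number of interior grid points (arbitrary)
h = 1.0 / (N + 1)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
S = np.linalg.inv(L)         # solution operator u -> y (dense; fine for a sketch)

t = np.linspace(h, 1.0 - h, N)
y_d = np.sin(np.pi * t)      # desired state (chosen arbitrarily)
a, b, alpha = -50.0, 50.0, 1e-4

def grad(u):
    # gradient of J(u) = ||S u - y_d||^2 + alpha ||u||^2
    return 2.0 * S.T @ (S @ u - y_d) + 2.0 * alpha * u

# step size from the Lipschitz constant 2(||S||^2 + alpha) of the gradient
tau = 1.0 / (2.0 * (np.linalg.norm(S, 2) ** 2 + alpha))

u = np.zeros(N)              # feasible starting point
for _ in range(2000):
    u = np.clip(u - tau * grad(u), a, b)   # gradient step + projection onto [a,b]

y = S @ u                    # the resulting state
```

The projection `np.clip` is what handles the pointwise bounds ${a\leq u\leq b}$; for the unconstrained, regularized problem one could instead solve a linear system directly.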

For inverse problems we may formulate:

Example 2 (Mother problem in inverse problems) Consider a bounded linear operator ${A:X\rightarrow Y}$ between two Hilbert spaces and assume that ${A}$ has non-closed range. In this case the pseudo-inverse ${A^\dagger}$ is not a bounded operator. Suppose now that we have measured data ${y^\delta\in Y}$ which is basically a noisy version of the “true data” ${y\in\text{range}\, A}$. Our aim is to approximate a solution of ${Ax=y}$ from the knowledge of ${y^\delta}$. Since ${A}$ does not have closed range, it is usually the case that ${y^\delta}$ is not in the domain of the pseudo-inverse and ${A^\dagger y^\delta}$ simply does not make sense. A widely used approach, also treated in my previous post, is Tikhonov regularization, that is, solving for a small regularization parameter ${\alpha>0}$

$\displaystyle \min_x\, \|Ax-y^\delta\|_Y^2 + \alpha\|x\|_X^2.$
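In the finite-dimensional case the Tikhonov minimizer has the closed form ${x_\alpha=(A^TA+\alpha I)^{-1}A^Ty^\delta}$, obtained from the normal equations. A minimal sketch (all sizes, the noise level and ${\alpha}$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, alpha = 50, 30, 1e-2

A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y_delta = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy data

# Tikhonov minimizer: solve the normal equations (A^T A + alpha I) x = A^T y_delta
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)
```

For ill-conditioned ${A}$ the shift by ${\alpha I}$ is what keeps the system stably solvable; as discussed below, how one thinks about the role of ${\alpha}$ is precisely where the two communities differ.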

Clearly both mother problems have a very similar mathematical structure: We may use the solution operator of the PDE, denote it by ${A}$, and restate the mother problem of optimal control of PDEs in a form similar to the mother problem of inverse problems. However, there are some important conceptual differences:

Desired state vs. data: In Example 1, ${y_d}$ is a desired state which, however, may not be reachable. In Example 2, ${y^\delta}$ is noisy data and hence should not be matched as closely as possible.

Control vs. solution: In Example 1 the result ${u}$ is an optimal control. Its form is not of prime importance as long as it fulfills the given bounds and allows for a good approximation of ${y_d}$. In Example 2 the result ${x}$ is the approximate solution itself (which, of course, shall somehow explain the measured data ${y^\delta}$). Its properties are themselves important.

Regularization: In Example 1 the regularization is mainly for numerical reasons. The problem itself also has a solution for ${\alpha=0}$; this is due to the fact that the set of admissible ${u}$ forms a weakly compact set. However, in Example 2 one may not choose ${\alpha=0}$: first because the functional will no longer have a minimizer, and second because one really does not want ${\|Ax-y^\delta\|}$ as small as possible, since ${y^\delta}$ is corrupted by noise. In particular, people in inverse problems are interested in the case in which both ${y^\delta\rightarrow y\in\text{range}\, A}$ and ${\alpha\rightarrow 0}$. In optimal control of PDEs, however, ${\alpha}$ is often seen as a model parameter which ensures that the control has a small energy.

These conceptual differences sometimes complicate the dialog between the fields. One often runs into discussion dead ends like “Why should we care about decaying ${\alpha}$—it’s given?” or “Why do you need these bounds on ${u}$? This makes your problem worse and you may not reach the state as well as possible…”. It often takes some time until the people involved realize that they really pursue different goals, that quantities which even have similar names are different things, and that the minimization problems can nevertheless be solved with the same techniques.

In our minisymposium we had the following talks:

• “Identification of an Unknown Parameter in the Main Part of an Elliptic PDE”, Arnd Rösch
• “Adaptive Discretization Strategies for Parameter Identification Problems in PDEs in the Context of Tikhonov Type and Newton Type Regularization”, Barbara Kaltenbacher
• “Optimal Control of PDEs with Directional Sparsity”, Gerd Wachsmuth
• “Nonsmooth Regularization and Sparsity Optimization”, Kazufumi Ito
• “${L^1}$ Fitting for Nonlinear Parameter Identification Problems for PDEs”, Christian Clason
• “Simultaneous Identification of Multiple Coefficients in an Elliptic PDE”, Bastian von Harrach

Finally, there was my own talk “Error Estimates for Joint Tikhonov and Lavrentiev Regularization of Parameter Identification Problems”, which is based on a similarly named paper available at http://arxiv.org/abs/0909.4648 and published in Applicable Analysis. The slides of the presentation are here (beware, there may be some wrong exponents in the pdf…).

In a nutshell, the message of the talk is: bounds on both the control/solution and the state/data may also be added to a Tikhonov-regularized inverse problem. If the operator has convenient mapping properties, then the bounds will eventually become inactive provided the true solution satisfies them. Hence, the known estimates for standard inverse problems are asymptotically recovered.