Today I’d like to blog about two papers which appeared on the arXiv.

**1. Regularization with the Augmented Lagrangian Method – Convergence Rates from Variational Inequalities**

The first one is the paper “Regularization of Linear Ill-posed Problems by the Augmented Lagrangian Method and Variational Inequalities” by Klaus Frick and Markus Grasmair.

Well, the title basically describes the content quite accurately. Recall that the Augmented Lagrangian Method (ALM) is a method to calculate solutions of certain convex optimization problems. For a convex, proper and lower-semicontinuous function $J$ on a Banach space $X$, a linear and bounded operator $A$ from $X$ into a Hilbert space $H$ and an element $g \in H$ consider the problem

$$\min_{u \in X} J(u) \quad\text{s.t.}\quad Au = g. \qquad (1)$$

The ALM goes as follows: start with an initial dual variable $p_0$, choose step sizes $\tau_k > 0$ and iterate

$$u_k \in \operatorname{arg\,min}_{u} \Big( J(u) + \langle p_{k-1}, Au - g\rangle + \tfrac{\tau_k}{2}\|Au - g\|^2 \Big), \qquad p_k = p_{k-1} + \tau_k (Au_k - g).$$

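Both the convergence for exact data and the semi-convergence effect for noisy data discussed below can be seen in a small numpy sketch. The quadratic choice $J(u) = \tfrac{1}{2}\|u\|^2$, the toy operator and all parameter values here are my own and purely illustrative; with this $J$, the minimization step reduces to a linear solve:

```python
import numpy as np

def alm(A, g, tau=100.0, iters=50):
    """Augmented Lagrangian iteration for min 1/2 ||u||^2  s.t.  Au = g.

    For J(u) = 1/2 ||u||^2 the minimization step has the closed form
    (I + tau A^T A) u_k = A^T (tau g - p_{k-1}), followed by the dual
    update p_k = p_{k-1} + tau (A u_k - g).
    """
    m, n = A.shape
    p = np.zeros(m)
    M = np.eye(n) + tau * (A.T @ A)
    iterates = []
    for _ in range(iters):
        u = np.linalg.solve(M, A.T @ (tau * g - p))
        p = p + tau * (A @ u - g)
        iterates.append(u)
    return iterates

# A badly conditioned toy operator (singular values 2^0, ..., 2^-19)
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U @ np.diag(0.5 ** np.arange(20)) @ V.T
u_true = V[:, 0] + 0.5 * V[:, 1]
g = A @ u_true

# exact data: the error decreases towards zero ...
errs_exact = [np.linalg.norm(u - u_true) for u in alm(A, g)]
# ... noisy data: the error decreases first but grows again later
g_delta = g + 1e-2 * rng.standard_normal(20)
errs_noisy = [np.linalg.norm(u - u_true) for u in alm(A, g_delta, iters=200)]
```

With exact data the error decays towards zero; with noisy data it first decreases and then grows again, which is the semi-convergence that makes a stopping rule necessary.
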
(These days one should note that this iteration is also known under the name Bregman iteration…) Indeed, it is known that the ALM converges to a solution of (1) if one exists. Klaus and Markus consider the ill-posed case, i.e. the range of $A$ is not closed and $g$ is replaced by noisy data $g^\delta$ which fulfills $\|g - g^\delta\| \le \delta$ (and hence, $g^\delta$ is in general not in the range of $A$). Then the ALM does not converge but diverges. However, one observes “semi-convergence” in practice, i.e. the iterates first approach an approximate solution of $Au = g$ (or even a true solution of (1)) but start to diverge from some point on. It is then natural to ask if the ALM with $g$ replaced by $g^\delta$ can be used for regularization, i.e. can one choose a stopping index $k^*$ (depending on $\delta$ and $g^\delta$) such that the iterates $u_{k^*}$ approach a solution of (1) as $\delta$ vanishes? This question has been answered in the affirmative in previous work by Klaus (here and here), and also estimates on the error and convergence rates have been derived under an additional assumption on the solution of (1). This assumption is what is called a “source condition” and says that there should exist some $p \in H$ such that for a solution $u^\dagger$ of (1) it holds that

$$A^* p \in \partial J(u^\dagger).$$

Under this assumption it has been shown that the Bregman distance between $u_{k^*}$ and $u^\dagger$ goes to zero linearly in $\delta$ under appropriate stopping rules. What Klaus and Markus investigate in this paper are different conditions which ensure convergence rates slower than linear. These conditions come in the form of “variational inequalities”, which have gained some popularity lately. As usual, these variational inequalities look somewhat messy at first sight. Klaus and Markus use a condition of the form

$$E(u) \le J(u) - J(u^\dagger) + \varphi\big(\|Au - g\|\big) \quad\text{for all } u$$

for some positive functional $E$ with $E(u^\dagger) = 0$ and some non-negative, strictly increasing and concave function $\varphi$. Under this assumption (and special choices of $E$) they derive convergence rates which again look quite complicated but can be reduced to simpler and more transparent special cases which resemble the situation one knows from other regularization methods (like ordinary Tikhonov regularization).
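
To see how such conditions relate to the source condition, recall the standard one-line estimate: with $\xi = A^* p \in \partial J(u^\dagger)$ and the Bregman distance $D_\xi(u, u^\dagger) = J(u) - J(u^\dagger) - \langle \xi, u - u^\dagger\rangle$ one has

```latex
D_\xi(u, u^\dagger)
  = J(u) - J(u^\dagger) + \langle p, A(u^\dagger - u) \rangle
  \le J(u) - J(u^\dagger) + \|p\| \, \|Au - Au^\dagger\|
```

that is, a variational inequality with the linear function $\varphi(t) = \|p\|\, t$; this matches the linear rate one gets under the source condition, while genuinely sublinear $\varphi$ correspond to weaker assumptions and slower rates.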

In the last section Klaus and Markus also treat sparse regularization (i.e. $J(u) = \|u\|_{\ell^1}$) and derive that already a weak variational inequality of this type implies the stronger source condition (with a different source element). Hence, interestingly, it seems that for sparse regularization one either gets a linear rate or nothing (in this framework).

**2. On necessary conditions for variational regularization**

The second paper is “Necessary conditions for variational regularization schemes” by Nadja Worliczek and myself. I have discussed some parts of this paper already on this blog here and here. In this paper we tried to formalize the notion of “a variational method” for regularization with the goal of obtaining necessary conditions for a variational scheme to be regularizing. As expected, this goal is quite ambitious and we cannot claim that we came up with ultimate necessary conditions which describe what kinds of variational methods are not possible. However, we could, firstly, relate the three kinds of variational methods (which I called Tikhonov, Morozov and Ivanov regularization here) to each other and, secondly, investigate the conditions on the data space a little closer. In recent years it turned out that one should not always use a norm term like $\|g - g^\delta\|$ to measure the noise or to penalize the deviation of $Au$ from $g^\delta$. For several noise models (like Poisson noise or multiplicative noise) other functionals are better suited. However, these functionals raise several issues: they are often not defined on a linear space but on a convex set, sometimes with the nasty property that this set has empty interior. They often do not have convenient algebraic properties (e.g. scaling invariance, a triangle inequality or the like). Finally, they are not necessarily (lower semi-)continuous with respect to the usual topologies. Hence, we approached the data space in a quite abstract way: the data space $Y$ is a topological space which comes with an additional sequential convergence structure $\mathcal{S}$ (see e.g. here) and on (a subset of) which there is a discrepancy functional $\rho$. Then we analyzed the interplay of these three things: the topology on $Y$, the convergence structure $\mathcal{S}$ and the discrepancy $\rho$. If you wonder why we use the additional sequential convergence structure, remember that in the (by now classical) setting for Tikhonov regularization in Banach spaces one minimizes a functional like

$$\|Au - g^\delta\|_Y^q + \alpha \|u\|_X^p$$

with some Banach space norms $\|\cdot\|_Y$ and $\|\cdot\|_X$. There are also two kinds of convergence on $Y$: the weak convergence (which is replaced by the convergence structure $\mathcal{S}$ in our setting), which is, e.g., used to describe convenient (lower semi-)continuity properties of the functional and the norm, and the norm convergence, which is used to describe that $g^\delta \to g$ for $\delta \to 0$. And since we do not have a normed space in our setting, and one does not use any topological properties of the norm convergence in the proofs of the regularizing properties, Nadja suggested to use a sequential convergence structure instead.
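
To make the issues with such discrepancy functionals concrete, consider the Kullback–Leibler divergence, which is the discrepancy naturally associated with Poisson noise. A small numpy sketch (function name and numbers are mine, purely illustrative) shows that it lives on a convex set of non-negative data rather than on a linear space, is not symmetric and violates the triangle inequality:

```python
import numpy as np

def kl_div(v, w):
    """Kullback-Leibler divergence sum_i (v_i log(v_i / w_i) - v_i + w_i).

    The natural discrepancy for Poisson noise: it is the negative
    log-likelihood up to terms that do not depend on w, and it is only
    defined for v >= 0 and w > 0, i.e. on a convex set.
    """
    v, w = np.asarray(v, float), np.asarray(w, float)
    if np.any(v < 0) or np.any(w <= 0):
        raise ValueError("KL divergence needs v >= 0 and w > 0")
    # convention: 0 * log 0 = 0
    t = np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0) / w), 0.0)
    return float(np.sum(t - v + w))

# no symmetry:
assert kl_div([2.0], [1.0]) != kl_div([1.0], [2.0])
# no triangle inequality: d(1, 4) > d(1, 2) + d(2, 4)
assert kl_div([1.0], [4.0]) > kl_div([1.0], [2.0]) + kl_div([2.0], [4.0])
```

So neither norm machinery nor metric axioms are available here, which is what pushes the analysis towards an abstract data space equipped with a topology, a convergence structure and a discrepancy functional.
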
