Some time ago I picked up the phrase Ivanov regularization. Starting with an operator A:X\to Y between two Banach spaces (say), one encounters the problem of instability of the solution of Ax=y if A has non-closed range. One dominant tool to regularize the solution is called Tikhonov regularization and consists of minimizing the functional \|Ax - y^\delta\|_Y^p + \alpha \|x\|_X^q. The meaning behind these terms is as follows: The term \|Ax -y^\delta\|_Y^p is often called the discrepancy, and it should not be too large, to guarantee that the “solution” somehow explains the data. The term \|x\|_X^q is often called the regularization functional and shall also not be too large, to have some meaningful notion of “solution”. The parameter \alpha>0 is called the regularization parameter and allows one to weight the discrepancy against the regularization.

For the case of Hilbert space one typically chooses p=q=2 and gets a functional for which the minimizer is given more or less explicitly as

x_\alpha = (A^*A + \alpha I)^{-1} A^* y^\delta.

The existence of this explicit solution seems to be one of the main reasons for the broad usage of Tikhonov regularization in the Hilbert space setting.
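In finite dimensions this formula can be evaluated directly; here is a minimal NumPy sketch (the function name is my choice):

import numpy as np

def tikhonov(A, y, alpha):
    # x_alpha = (A^* A + alpha I)^{-1} A^* y^delta via a single linear solve
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

For badly conditioned A one would rather work with the SVD or an equivalent least squares formulation, but the point is that one linear solve suffices.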

Another related approach is sometimes called the residual method; however, I would prefer the term Morozov regularization. Here one again balances the terms “discrepancy” and “regularization”, but in a different way: one solves

\min \|x\|_X\ \text{s.t.}\ \|Ax-y^\delta\|_Y\leq \delta.

That is, one tries to find an x with minimal norm which explains the data y^\delta up to an accuracy \delta. The idea is that \delta reflects the so-called noise level, i.e. an estimate of the error which is made during the measurement of y. One advantage of Morozov regularization over Tikhonov regularization is that the meaning of the parameter \delta>0 is much clearer than the meaning of \alpha>0. However, there is no closed form solution for Morozov regularization.
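Still, in the Hilbert space setting (p=q=2) there is a standard way to compute the minimizer: under suitable conditions (e.g. \delta < \|y^\delta\|), Morozov regularization coincides with Tikhonov regularization with \alpha chosen such that the discrepancy equals \delta (Morozov's discrepancy principle). Since the residual \|Ax_\alpha - y^\delta\| increases monotonically in \alpha, a bisection over \alpha works; here is a minimal sketch (the function name and the bracketing interval are my choices):

import numpy as np

def morozov(A, y, delta, alpha_lo=1e-10, alpha_hi=1e10, iters=60):
    # find alpha with ||A x_alpha - y|| = delta by geometric bisection,
    # using that the residual is monotonically increasing in alpha
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x_alpha = lambda a: np.linalg.solve(AtA + a * np.eye(n), Aty)
    for _ in range(iters):
        alpha = np.sqrt(alpha_lo * alpha_hi)
        if np.linalg.norm(A @ x_alpha(alpha) - y) > delta:
            alpha_hi = alpha  # residual too large: decrease alpha
        else:
            alpha_lo = alpha  # residual below delta: alpha may increase
    return x_alpha(alpha_lo)  # feasible up to the bisection accuracy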

Ivanov regularization is yet another method: solve

\min \|Ax-y^\delta\|_Y\ \text{s.t.}\ \|x\|_X \leq \tau.

Here one could say that one wants to have the smallest discrepancy among all x which are not too “rough”.

Ivanov regularization in this form does not have too many appealing properties: The parameter \tau>0 does not seem to have a proper motivation and, moreover, there is again no closed form solution.
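Still, the minimizer is not hard to compute in the Hilbert space case: the objective is smooth, and projecting onto the ball \{x : \|x\|_X\leq\tau\} is just a rescaling, so projected gradient descent applies. A minimal sketch (step size and iteration count are my choices):

import numpy as np

def ivanov(A, y, tau, iters=500):
    # minimize (1/2)||Ax - y||^2 subject to ||x||_2 <= tau by projected gradient
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))  # gradient step on the discrepancy
        norm_x = np.linalg.norm(x)
        if norm_x > tau:
            x *= tau / norm_x               # projection onto the ball is a rescaling
    return x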

However, recently the focus of variational regularization (as all these methods may be called) has shifted from using norms to the use of more general functionals. One even considers Tikhonov regularization in an abstract form as minimizing

S(Ax,y^\delta) + \alpha R(x)

with a “general” similarity measure S and a general regularization term R, see e.g. the dissertation of Christiane Pöschl (which can be found here, thanks Christiane) or the works of Jens Flemming. Prominent examples for the similarity measure are of course norms of differences, but also the Kullback-Leibler divergence or the Itakura-Saito divergence, which are both treated in this paper. For the regularization term one uses norms and semi-norms in various spaces, e.g. Sobolev (semi-)norms, Besov (semi-)norms, the total variation seminorm or \ell^p norms.
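To make the abstract form concrete, here is a small sketch of such a functional with the generalized Kullback-Leibler divergence as similarity measure and the \ell^1 norm as regularization term (the function names, the smoothing parameter eps and these concrete choices of S and R are mine):

import numpy as np

def kl(u, v, eps=1e-12):
    # generalized Kullback-Leibler divergence of v from u for nonnegative data:
    # sum(v log(v/u) - v + u); eps avoids log(0)
    return np.sum(v * np.log((v + eps) / (u + eps)) - v + u)

def general_tikhonov(x, A, y, alpha, S=kl, R=lambda z: np.sum(np.abs(z))):
    # the abstract Tikhonov functional S(Ax, y^delta) + alpha R(x)
    return S(A @ x, y) + alpha * R(x)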

In all these cases, the advantage of Tikhonov regularization of having a closed form solution is not there anymore. Then, the most natural choice would be, in my opinion, Morozov regularization, because one may use the noise level directly as a parameter. However, from a practical point of view one should also care about the problem of calculating the minimizer of the respective problems. Here, I think that Ivanov regularization is important again: Often the similarity measure S is somehow smooth, but the regularization term R is nonsmooth (e.g. for total variation regularization or sparse regularization with an \ell^p penalty). Hence, both Tikhonov and Morozov regularization have a nonsmooth objective function. Somehow, Tikhonov regularization is still a bit easier, since the minimization is unconstrained. Morozov regularization has a constraint which is usually quite difficult to handle: e.g. it is usually difficult (might it even be ill-posed?) to project onto the set defined by S(Ax,y^\delta)\leq \delta. Ivanov regularization has a smooth objective functional (at least if the similarity measure is smooth) and a constraint which is usually somehow simple (i.e. projections are not too difficult to obtain), as the following sketch illustrates.
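For sparse regularization the relevant constraint set is the \ell^1 ball \{x : \|x\|_1 \leq \tau\}, and the Euclidean projection onto it can be computed exactly by a sorting-based algorithm (see e.g. Duchi et al., 2008). A sketch (the function name is mine); plugged into the projected gradient iteration above, this gives a solver for sparse Ivanov regularization:

import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection of v onto {x : ||x||_1 <= tau}
    if np.sum(np.abs(v)) <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]             # magnitudes, sorted descending
    cumsum = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cumsum - tau)[0][-1]
    theta = (cumsum[rho] - tau) / (rho + 1)  # soft-thresholding level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)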

Now, I found that all three methods, Tikhonov, Morozov and Ivanov regularization, are treated in the book “Theory of linear ill-posed problems and its applications” by V. K. Ivanov, V. V. Vasin and Vitaliĭ Pavlovich Tanana: Ivanov regularization goes under the name “method of quasi solutions” (Section 3.2), Tikhonov regularization is treated in Section 3.3, and Morozov regularization is called “method of residual” (Section 3.4). Well, I think I should read these sections a bit closer now…
