May 2011

Coming back to regularization, especially Ivanov regularization. Recall that I used the term Ivanov regularization for the minimization problem

\displaystyle   \min S(Ax,y^\delta)\ \text{ s.t. }\ R(x)\leq \tau. \ \ \ \ \ (1)

I stumbled upon another reference: It seems that in the case that the constraint {R(x)\leq \tau} defines a compact set, this method is usually referred to as the “method of quasi solutions”. More precisely, I found this in “Elements of the theory of inverse problems” by A.M. Denisov, Chapter 6. There he works in metric spaces and proves the following:

Theorem 1 Let {X,Y} be metric spaces with metrics {d_X}, {d_Y}, respectively and {A:X\rightarrow Y} continuous. Furthermore let {M\subset X} be compact, {y^\dagger} be in the range of {A} and assume that {x^\dagger} is the unique solution of {Ax=y^\dagger} which lies in {M}. Finally for a {y^\delta} with {d_Y(y^\delta,y^\dagger)\leq\delta} define {X_\delta = \{x\ :\ d_Y(Ax,y^\delta)\leq\delta\}} and {X_\delta^M = X_\delta\cap M}. Then it holds for {\delta\rightarrow 0} that

\displaystyle  \sup_{x\in X_\delta^M}d_X(x,x^\dagger) \rightarrow 0.

Remark 1 Before we prove this theorem, we relate it to what I called Ivanov regularization above: The set {M} is encoded in (1) as {M = \{x\ :\ R(x)\leq\tau\}} and the “discrepancy measure” {S} is simply the metric {d_Y}. Hence, let {x_M^\delta} denote a solution of

\displaystyle  \min\ d_Y(Ax,y^\delta)\ \text{ s.t. } x\in M.

Because {x^\dagger} is feasible for this problem, it follows from {d_Y(Ax^\dagger,y^\delta) = d_Y(y^\dagger,y^\delta)\leq\delta} that {d_Y(Ax_M^\delta,y^\delta)\leq\delta}. Hence, {x_M^\delta\in X_\delta^M}. In other words: Ivanov regularization produces one element of the set {X_\delta^M}. Now the theorem says that every element of {X_\delta^M} is a good approximation of {x^\dagger} (at least asymptotically).

Proof: We take a sequence {\delta_n\rightarrow 0} and assume to the contrary that there exists {\epsilon>0} such that for every {n} there is an {x_{\delta_n}\in X_{\delta_n}^M} with {d_X(x_{\delta_n},x^\dagger)\geq \epsilon}. Since all {x_{\delta_n}} lie in the compact set {M}, there is a convergent subsequence with limit {\bar x\in M}, and passing to the limit gives {d_X(\bar x,x^\dagger)\geq \epsilon}. However, along this subsequence it holds that

\displaystyle  d_Y(A\bar x,y^\dagger) = \lim_{n\rightarrow \infty} d_Y(Ax_{\delta_n},y^\dagger) \leq \lim_{n\rightarrow \infty} \bigl(d_Y(Ax_{\delta_n},y^{\delta_n}) + d_Y(y^{\delta_n},y^\dagger)\bigr) \leq \lim_{n\rightarrow\infty}2\delta_n = 0,

using the continuity of {A}. Hence, {\bar x} is a solution of {Ax=y^\dagger} which lies in {M}, and by the assumed uniqueness {\bar x = x^\dagger}, contradicting {d_X(\bar x,x^\dagger)\geq\epsilon}. \Box

Coming back to the interpretation of Theorem 1 and Ivanov regularization: Instead of Ivanov regularization, one could also use the following feasibility problem: Find an {x} such that both {d_Y(Ax,y^\delta)\leq\delta} and {x\in M}. In the case of vector spaces {X} and {Y} and a convex set {M}, this is a convex feasibility problem which one may attack with available methods.
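To make the Ivanov problem (1) concrete, here is a minimal numerical sketch under illustrative assumptions: {S} is the squared Euclidean discrepancy, {R} is the Euclidean norm, and the solver is a simple projected gradient method. The toy matrix, the noise level and the helper name `ivanov` are all my own choices, not part of any reference:

```python
import numpy as np

def ivanov(A, y, tau, iters=500):
    """Projected gradient for  min ||Ax - y||_2^2  s.t.  ||x||_2 <= tau."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the gradient of 0.5*||Ax - y||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)        # gradient step on the discrepancy
        nx = np.linalg.norm(x)
        if nx > tau:                            # project back onto {x : R(x) <= tau}
            x *= tau / nx
    return x

# tiny ill-conditioned toy problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(1.0 / np.arange(1, 11) ** 2)
x_true = rng.standard_normal(10)
y_delta = A @ x_true + 0.01 * rng.standard_normal(20)

tau = np.linalg.norm(x_true)                    # assume the correct "size" is known
x_tau = ivanov(A, y_delta, tau)
print(np.linalg.norm(x_tau) <= tau + 1e-10)     # the constraint is satisfied: True
```

Since projecting onto the ball {R(x)\leq\tau} is cheap here, the constrained (Ivanov) formulation is algorithmically just as accessible as a penalized one.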

A further important remark is that we did not assume any linearity of {A} (of course: we did not even assume a linear structure on {X} or {Y}). Hence, the theorem seems very powerful: There is no regularization parameter involved and one still gets convergence to the true solution! However, one of the assumptions in the theorem is somewhat strong: the uniqueness of {x^\dagger}. To illustrate this we consider a special case:

Example 1 Let {X} and {Y} be real (or complex) vector spaces and {A} be linear with non-trivial null space. Furthermore, assume that {M\subset X} is convex and compact and consider scaled versions {\tau M} for {\tau>0}. Then the set of solutions of {Ax=y^\dagger} is an affine space in {X} and there are three cases for the intersection of this set and {\tau M}:

  1. The intersection is empty.
  2. The intersection is a convex set and contains infinitely many elements.
  3. The intersection contains exactly one element.

The last case occurs precisely when the affine space of solutions is tangent to {\tau M}. Loosely speaking, one may say that this case only occurs if the set {M} is scaled to precisely the right size such that it only touches the affine space of solutions.

Another strong assumption in Theorem 1 is that the set {M} is compact. However, there is a way to relax this condition somewhat. Basically, we only need compactness to obtain the convergent subsequence. Hence, one could try to work with a weaker topology on {X} (which gives a weaker notion of convergence but makes compactness easier to obtain) and then obtain a limit of a subsequence which converges in the weaker sense only. Then one needs some tool to deduce that the weak limit is indeed a solution. This strategy works, for example, in Banach spaces:

Example 2 Let {X} and {Y} be reflexive Banach spaces and {A:X\rightarrow Y} be linear, bounded and one-to-one. We use the set {M = \{x\ :\ \|x\|_X\leq R\}} as prior knowledge on the solution of {Ax=y^\dagger}. Moreover, we use the metrics induced by the norms of {X} and {Y}, respectively: {d_X(x,x') = \|x-x'\|_X} and {d_Y(y,y') = \|y-y'\|_Y}.

Obviously, {M} is not compact (if {X} is infinite dimensional) but it is weakly compact (and by the Eberlein–Smulian theorem also weakly sequentially compact). In the situation of the proof of Theorem 1 we only get a weakly convergent subsequence {x_{\delta_n}\rightharpoonup \bar x}. However, a bounded linear operator {A} is also weak-to-weak continuous, and hence {Ax_{\delta_n}\rightharpoonup A\bar x}. While we only have a weakly convergent sequence, we still obtain the contradiction in the estimate from the proof since the norm is weakly lower semicontinuous.

Another way to justify the assumption that the solution lies in a known compact set is that in practice we always use a representation of the solution which only uses a finite number of degrees of freedom (think of a Galerkin ansatz, for example). However, this interpretation somehow neglects that we are interested in finding the true solution of the true infinite dimensional problem and that the discretization of the problem should be treated as a separate issue. Just relying on the regularizing effect of discretization will almost surely result in a method whose stability properties depend on the resolution of the discretization.

Finally: Another good reference for these somewhat ancient results in regularization theory is one of the first books on the topic: “Solutions of ill-posed problems” by Tikhonov and Arsenin (1977). While it took me some time to get used to the style of presentation, I have to admit that it is really worth reading this book (and other translations of Russian mathematical literature).

Recently, Arnd Rösch and I organized the minisymposium “Parameter identification and nonlinear optimization” at the SIAM Conference on Optimization. One of the aims of this symposium was to initiate more connections between the communities of optimal control of PDEs on the one hand and regularization of ill-posed problems on the other hand. To give a little bit of background, let me formulate the “mother problems” of both fields:

Example 1 (Mother problem in optimal control of PDEs) We consider a bounded Lipschitz domain {\Omega} in {{\mathbb R}^2} (or {{\mathbb R}^3}). Assume that we are given a target (or desired state) {y_d}, a real valued function on {\Omega}. Our aim is to find a function (or control) {u} (also defined on {\Omega}) such that the solution {y} of the equation

\displaystyle  \begin{array}{rcl}  \Delta y & = u,\ \text{ on }\ \Omega\\ y & = 0,\ \text{ on }\ \partial\Omega. \end{array}

is close to the desired state {y_d}. Moreover, the control shall obey the pointwise bounds

\displaystyle  a\leq u \leq b.

This motivates the following constrained optimization problem

\displaystyle  \begin{array}{rcl}  \min_{y,u} \|y - y_d\|_{L^2}^2\quad \text{s.t.} & \Delta y = u,\ \text{ on }\ \Omega\\ & y = 0,\ \text{ on }\ \partial\Omega.\\ & a\leq u\leq b. \end{array}

Often the regularized problem is considered as well: For a small {\alpha>0} one solves

\displaystyle  \begin{array}{rcl}  \min_{y,u} \|y - y_d\|_{L^2}^2 + \alpha\|u\|_{L^2}^2\quad \text{s.t.} & \Delta y = u,\ \text{ on }\ \Omega\\ & y = 0,\ \text{ on }\ \partial\Omega.\\ & a\leq u\leq b. \end{array}

(This problem is also treated extensively in Section 1.2.1 of the excellent book “Optimal Control of Partial Differential Equations” by Fredi Tröltzsch.)
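As a rough numerical illustration of the regularized problem, one can discretize a 1D analogue ({y'' = u} on {(0,1)} with zero boundary values) by finite differences and run a projected gradient method to handle the box constraints. Grid size, bounds, target and step size below are arbitrary illustrative choices of mine, not taken from any reference:

```python
import numpy as np

n = 50                                    # interior grid points of (0, 1)
h = 1.0 / (n + 1)
# finite difference Laplacian y'' with homogeneous Dirichlet boundary values
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h ** 2
S = np.linalg.inv(L)                      # control-to-state map u -> y solving y'' = u
x = np.linspace(h, 1.0 - h, n)
y_d = np.sin(np.pi * x)                   # desired state
a, b, alpha = -60.0, 60.0, 1e-6           # box constraints and regularization weight

u = np.zeros(n)
step = 1e-2 / (np.linalg.norm(S, 2) ** 2 + alpha)
for _ in range(5000):
    grad = S.T @ (S @ u - y_d) + alpha * u  # gradient of 0.5*||Su - y_d||^2 + 0.5*alpha*||u||^2
    u = np.clip(u - step * grad, a, b)      # projected gradient step enforcing a <= u <= b
print(np.linalg.norm(S @ u - y_d) / np.linalg.norm(y_d))   # small relative misfit
```

Here the target is reachable within the box, so the misfit becomes small; with tighter bounds the projection would become active and the misfit would stay away from zero.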

For inverse problems we may formulate:

Example 2 (Mother problem in inverse problems) Consider a bounded linear operator {A:X\rightarrow Y} between two Hilbert spaces and assume that {A} has non-closed range. In this case, the pseudo-inverse {A^\dagger} is not a bounded operator. Assume now that we have measured data {y^\delta\in Y} which is basically a noisy version of the “true data” {y\in\text{range}A}. Our aim is to approximate a solution of {Ax=y} from the knowledge of {y^\delta}. Since {A} does not have closed range, it is usually the case that {y^\delta} is not in the domain of the pseudo-inverse and {A^\dagger y^\delta} simply does not make sense. A widely used approach, also treated in my previous post, is Tikhonov regularization, that is, solving for a small regularization parameter {\alpha>0}

\displaystyle  \min_x\, \|Ax-y^\delta\|_Y^2 + \alpha\|x\|_X^2

Clearly both mother problems have a very similar mathematical structure: We may use the solution operator of the PDE, denote it by {A}, and restate the mother problem of optimal control of PDEs in a form similar to the mother problem of inverse problems. However, there are some important conceptual differences:

Desired state vs. data: In Example 1, {y_d} is a desired state which, however, may not be reachable. In Example 2, {y^\delta} is noisy data and hence should not be matched as closely as possible.

Control vs. solution: In Example 1 the result {u} is an optimal control. Its form is not of prime importance as long as it fulfills the given bounds and allows for a good approximation of {y_d}. In Example 2 the result {x} is the approximate solution itself (which, of course, shall somehow explain the measured data {y^\delta}). Its properties are themselves important.

Regularization: In Example 1 the regularization is mainly for numerical reasons. The problem itself also has a solution for {\alpha=0}. This is due to the fact that the set of admissible {u} forms a weakly compact set. However, in Example 2 one may not choose {\alpha=0}: First, because the functional would no longer have a minimizer, and secondly, one really does not want {\|Ax-y^\delta\|} to be as small as possible since {y^\delta} is corrupted by noise. In particular, people in inverse problems are interested in the case in which both {y^\delta\rightarrow y\in\text{range}A} and {\alpha\rightarrow 0}. In optimal control of PDEs, however, {\alpha} is often seen as a model parameter which ensures that the control has a somewhat small energy.

These conceptual differences sometimes complicate the dialog between the fields. One often runs into discussion dead ends like “Why should we care about decaying {\alpha}? It’s given!” or “Why do you need these bounds on {u}? This makes your problem worse and you may not reach the state as well as possible…”. It often takes some time until the people involved realize that they really pursue different goals, that quantities which even have similar names are different things, and that the minimization problems can nonetheless be solved with the same techniques.

In our minisymposium we had the following talks:

  • “Identification of an Unknown Parameter in the Main Part of an Elliptic PDE”, Arnd Rösch
  • “Adaptive Discretization Strategies for Parameter Identification Problems in PDEs in the Context of Tikhonov Type and Newton Type Regularization”, Barbara Kaltenbacher
  • “Optimal Control of PDEs with Directional Sparsity”, Gerd Wachsmuth
  • “Nonsmooth Regularization and Sparsity Optimization”, Kazufumi Ito
  • “{L^1} Fitting for Nonlinear Parameter Identification Problems for PDEs”, Christian Clason
  • “Simultaneous Identification of Multiple Coefficients in an Elliptic PDE”, Bastian von Harrach

Finally, there was my own talk “Error Estimates for Joint Tikhonov and Lavrentiev Regularization of Parameter Identification Problems”, which is based on a paper with a similar name published in Applicable Analysis. The slides of the presentation are here (beware, there may be some wrong exponents in the pdf…).

In a nutshell, the message of the talk is: Bounds on both the control/solution and the state/data may also be added to a Tikhonov-regularized inverse problem. If the operator has convenient mapping properties, then the bounds eventually become inactive, provided the true solution fulfills them. Hence, the known estimates for usual inverse problems are asymptotically recovered.

There are still some things left that I wanted to add about the issue of weak-* convergence in {L^\infty}, non-linear distortions and Young measures. The first is that Young measures are not able to describe all effects of weak-* convergence; namely, the notion does not handle concentrations properly. The second is that there is an alternative approach based on the {\chi}-function, which I also find graphically appealing.

1. Concentrations and Young measures

One can distinguish several “modes” in which a sequence of functions can behave. In this blog entry of Terry Tao, he introduces four more modes apart from oscillation:

  1. escape to horizontal infinity
  2. escape to width infinity
  3. escape to vertical infinity
  4. typewriter sequence

1.1. Escape to horizontal infinity

This mode is most easily described by the sequence {f_n = \chi_{[n,n+1]}}, i.e. the characteristic functions of intervals of unit length which escape to infinity. Obviously, this sequence does not converge in any {L^p({\mathbb R})} norm, and its weak convergence depends on {p}:

For {p>1} the sequence converges weakly (weakly-* for {p=\infty}) to zero. This can be seen as follows: Assume that for some non-negative {g\in L^q} (with {1/p + 1/q = 1}) we have {\epsilon \leq \int g f_n = \int_n^{n+1} g} for all {n}. Then we get with Hölder's inequality that {\epsilon^q \leq \int_n^{n+1} |g|^q} for all {n}. But this contradicts the fact that {g\in L^q}.

For {p=1} the sequence does not converge weakly to zero, as can be seen by testing with the function {g \equiv 1}, and it also does not converge weakly at all (test with {g = \sum_n (-1)^n\chi_{[n,n+1]}} and observe that the dual pairings do not converge).
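Both claims are easy to check numerically with plain Riemann sums; the decaying test function below is an arbitrary choice of mine:

```python
import numpy as np

xs = np.linspace(0.0, 200.0, 2_000_000, endpoint=False)
dx = xs[1] - xs[0]

def pair(n, g):
    """Riemann sum for the pairing <f_n, g> = int_n^{n+1} g  with f_n = chi_[n,n+1]."""
    mask = (xs >= n) & (xs < n + 1)
    return np.sum(g(xs[mask])) * dx

g_decay = lambda t: 1.0 / (1.0 + t)       # lies in L^q for every q > 1
print([round(pair(n, g_decay), 4) for n in (1, 10, 100)])   # tends to 0
g_one = lambda t: np.ones_like(t)         # g = 1 lies in L^inf, the dual of L^1
print([round(pair(n, g_one), 4) for n in (1, 10, 100)])     # stays near 1
```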

However, this type of convergence does not occur on bounded domains and hence can not be treated with Young measures as they were introduced in my previous entry.

1.2. Escape to width infinity

The paradigm for this mode of convergence is {f_n = \frac{1}{n}\chi_{[0,n]}}. This sequence even converges strongly in {L^p} for {p>1}, but not strongly in {L^1}. It does not converge weakly in {L^1} either: testing with {g\equiv 1} gives {\int f_n g = 1} for all {n}, while the pointwise limit is zero. This mode needs, similar to the previous mode, an unbounded domain.

1.3. Escape to vertical infinity

The prime example, normalized in {L^2(]-1,1[)}, for this mode is {f_n = \sqrt{n}\chi_{[-1/n,1/n]}}. By testing with continuous functions (which is enough by density) one sees that the weak limit is zero.

If one wants to assign some limit to the sequence {f_n}, one can say that the measures {f_n^2\mathfrak{L}} converge weakly in the sense of measures to {2\delta_0}, i.e. twice the point mass at zero.
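The concentration limit {f_n^2\mathfrak{L}\rightarrow 2\delta_0} can be checked numerically; the test function {\phi=\cos} is an arbitrary bounded continuous choice:

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2_000_001)    # fine grid on [-1, 1]
dx = xs[1] - xs[0]
phi = np.cos                              # an arbitrary bounded continuous test function

vals = []
for n in (10, 100, 1000):
    fn_sq = np.where(np.abs(xs) <= 1.0 / n, float(n), 0.0)  # f_n^2 for f_n = sqrt(n)*chi
    vals.append(np.sum(fn_sq * phi(xs)) * dx)               # Riemann sum of f_n^2 * phi
print([round(v, 3) for v in vals])        # approaches 2*phi(0) = 2
```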

Now, what does the Young measure say here?

We check narrow convergence of the Young measures {\mu^{f_n}} by testing with functions of the type {\psi(x,y) = \chi_B(x)\phi(y)} for a Borel set {B} and a bounded continuous function {\phi}. Then we get for {n\rightarrow \infty}

\displaystyle  \int_{\Omega\times{\mathbb R}} \psi(x,y){\mathrm d}\mu^{f_n}(x,y) = \int_B \phi(f_n(x)){\mathrm d}\mathfrak{L}(x) = \phi(\sqrt{n})\,\mathfrak{L}(B\cap[-\tfrac1n,\tfrac1n]) + \phi(0)\,\mathfrak{L}(B\setminus[-\tfrac1n,\tfrac1n]) \rightarrow \phi(0)\,\mathfrak{L}(B),

since {\phi} is bounded and {\mathfrak{L}([-\tfrac1n,\tfrac1n])\rightarrow 0}. Hence,

\displaystyle  \mu^{f_n}\rightharpoonup \delta_0\otimes\mathfrak{L},

which is the Young measure associated to the zero function, i.e. to the weak limit of the {f_n}.

We conclude that this mode of convergence can not be seen by Young measures. As Attouch et al. say in Section 4.3.7: “Young measures do not capture concentrations”.

1.4. Typewriter sequence

A typewriter sequence on an interval (as described in Example 4 of this blog entry of Terry Tao) is a sequence of functions which are mostly zero and whose non-zero parts revisit every point of the interval again and again, with smaller and smaller support and integral. This is an example of a sequence which converges in the {L^1} norm but not pointwise at any point. However, this mode of convergence is not very interesting with respect to Young measures. It basically behaves like “escape to vertical infinity” above.

2. Weak convergence via the {\chi}-function

While Young measures put a uniformly distributed measure on the graph of the function and thus are a more “graphical” representation of it, the approach described now uses the area between the graph and the {x}-axis.

We consider an open and bounded domain {\Omega\subset {\mathbb R}^d}. Now we define the {\chi}-function {\chi:{\mathbb R}\times{\mathbb R}\rightarrow {\mathbb R}} by

\displaystyle  \chi(\xi,u) = \begin{cases} 1, & 0<\xi<u\\ -1, & u<\xi<0\\ 0, &\text{else}. \end{cases}

The function looks like this:

We then associate to a given function {u:\Omega\rightarrow{\mathbb R}} the function {U:(x,\xi) \mapsto \chi(\xi,u(x))}. Graphically, this function has the value {1} if {\xi} is positive and lies between zero and {u(x)}, and it is {-1} if {\xi} is negative and again lies between zero and {u(x)}. In other words: the function {(x,\xi)\mapsto \chi(\xi,u(x))} is piecewise constant on the area between zero and the graph of {u}, encoding the sign of {u}. For the functions {f_n} from Example 1 in the previous post this looks like this:

Similar to the approach via Young measures, we now consider the sequence of the new objects, i.e. the sequence of the functions {(x,\xi)\mapsto \chi(\xi,f_n(x))}, and use a weak form of convergence here. For Young measures we used narrow convergence; here we use plain weak-* convergence.
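A small numerical sketch of this construction: the helper `chi` (my naming) implements the {\chi}-function, and averaging {\chi(\xi,f_n(x))} over {x}, for the oscillating sequence from Example 1 with the illustrative values {a=1}, {b=-1}, approximates the weak-* limit {F(\cdot,\xi)} for fixed {\xi}:

```python
import numpy as np

def chi(xi, u):
    """chi(xi, u) = 1 if 0 < xi < u, -1 if u < xi < 0 and 0 else."""
    return np.where((0 < xi) & (xi < u), 1.0,
                    np.where((u < xi) & (xi < 0), -1.0, 0.0))

a, b, n = 1.0, -1.0, 200                          # f_n oscillates between a and b
xs = np.linspace(0.0, 1.0, 100_000, endpoint=False)
fn = np.where(np.floor(n * xs) % 2 == 0, a, b)

# averaging over x approximates the weak-* limit F(., xi) for fixed xi
for xi in (0.5, -0.5, 2.0):
    print(round(chi(xi, fn).mean(), 2))           # 0.5, -0.5 and 0.0
```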

One can show the following lemma:

Lemma 1 Assume that {f_n} converges weakly-* in {L^\infty(\Omega)}. Then, for the weak-* limit {F} of the mappings {(x,\xi)\mapsto\chi(\xi,f_n(x))}, there exists a family of probability measures {(\nu_x)_{x\in\Omega}} such that

\displaystyle  \partial_\xi F(x,\cdot) = \delta_0 - \nu_x.

The proof (in a slightly different situation) can be found in Kinetic Formulation of Conservation Laws, Lemma 2.3.1.

Example 1 We again consider Example 1 from my previous post:

\displaystyle  f_n(x) = \begin{cases} a & \text{for }\ \tfrac{2k}{n} \leq x < \tfrac{2k+1}{n},\ k\in{\mathbb Z}\\ b & \text{else.} \end{cases}

The graph of some {f_n} and the corresponding function {F_n:(x,\xi) \mapsto \chi(\xi,f_n(x))} were shown above. Obviously, the weak-* limit of these {\chi}-functions {F_n} is (in the case {b<0<a}) given by

\displaystyle  F(x,\xi) = \begin{cases} \tfrac12, & 0 < \xi < a\\ -\tfrac12, & b< \xi < 0\\ 0, & \text{else}. \end{cases}

This can be illustrated as

Now take the weak derivative with respect to {\xi} (which is, as the function {F} itself, independent of {x}) to get

\displaystyle  \partial_\xi F(x,\cdot) = \delta_0 - \tfrac12 (\delta_a + \delta_b)

and, comparing with Lemma 1, we see

\displaystyle  \nu_x = \tfrac12 (\delta_a + \delta_b).

Cool: That is precisely the same limit as obtained by the Young measure!

Well, the observation in this example is not an accident and indeed this approach is closely related to Young measures. Namely, it holds that {\mu^{f_n}_x\rightharpoonup \nu_x}.

Maybe I’ll come back to the proof of this fact later (it seemed not too hard, but it used a different definition of Young measures than the one I used here).

To conclude: Both the approach via Young measures and the approach via the {\chi}-function lead to the same new understanding of weak-* limits in {L^\infty}. This new understanding is a little bit deeper than the usual one as it goes well with non-linear distortions of functions. And finally: Both approaches are geometric in nature: Young measures put an equidistributed measure on the graph of the function, and the {\chi}-function puts {\pm1} on the area between the graph and the {x}-axis.

This entry is not precisely about something I stumbled upon but about something that I wanted to learn for some time now, namely Young measures. Lately I had a several-hour train ride and I took the book Kinetic Formulation of Conservation Laws with me.

While the book is about hyperbolic PDEs and their formulation as kinetic equations, it also has some pointers to Young measures. Roughly, Young measures are a way to describe weak limits of functions and especially how these weak limits behave under non-linear functions. Hence, we start with this notion.

1. Weak convergence of functions

We are going to deal with sequences of functions {(f_n)} in spaces {L^p(\Omega)} for some open bounded domain {\Omega} and some {1\leq p\leq \infty}.

For {1\leq p < \infty} the dual space of {L^p(\Omega)} is {L^q(\Omega)} with {1/p + 1/q = 1} and the dual pairing is

\displaystyle \langle f,g\rangle_{L^p(\Omega)\times L^q(\Omega)} = \int_\Omega f\, g.

Hence, a sequence {(f_n)} converges weakly in {L^p(\Omega)} to {f} if for all {g\in L^q(\Omega)} it holds that

\displaystyle \int f_n\, g \rightarrow \int f\, g.

We denote weak convergence (if the space is clear) with {f_n\rightharpoonup f}.

For the case {p=\infty} one usually uses the so-called weak-* convergence: A sequence {(f_n)} in {L^\infty(\Omega)} converges weakly-* to {f}, if for all {g\in L^1(\Omega)} it holds that

\displaystyle \int f_n\, g \rightarrow \int f\, g.

The reason for this is that the dual space of {L^\infty(\Omega)} is not easily accessible, as it can not be described as a function space. (If I recall correctly, this is described in “Linear Operators” by Dunford and Schwartz.) Weak-* convergence will be denoted by {f_n\rightharpoonup^* f}.

In some sense it is enough to consider weak-* convergence in {L^\infty(\Omega)} to understand what Young measures are about, and I will stick to this kind of convergence here.

Example 1 We consider {\Omega = [0,1]} and two values {a,b\in{\mathbb R}}. We define a sequence of functions which jump between these two values with increasing frequency:

\displaystyle f_n(x) = \begin{cases} a & \text{for }\ \tfrac{2k}{n} \leq x < \tfrac{2k+1}{n},\ k\in{\mathbb Z}\\ b & \text{else.} \end{cases}

The functions {f_n} look like this:

To determine the weak limit, we test with very simple functions, let's say with {g = \chi_{[x_0,x_1]}}. Then we get

\displaystyle \int f_n\, g = \int_{x_0}^{x_1} f_n \rightarrow (x_1-x_0)\tfrac{a+b}{2}.

Hence, we see that the weak-* limit of the {f_n} (which is, by the way, always unique) has no other choice than being

\displaystyle f \equiv \frac{a+b}{2}.

In words: the {f_n} converge weakly-* to the arithmetic mean of the two values between which they oscillate.
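This computation is easy to reproduce numerically (the values {a,b} and the test interval {[x_0,x_1]} below are arbitrary choices):

```python
import numpy as np

a, b = 3.0, 1.0
x0, x1 = 0.2, 0.65
xs = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
dx = 1.0 / len(xs)
mask = (xs >= x0) & (xs < x1)

vals = []
for n in (4, 40, 400):
    fn = np.where(np.floor(n * xs) % 2 == 0, a, b)  # a on [2k/n, (2k+1)/n), b else
    vals.append(np.sum(fn[mask]) * dx)              # pairing with g = chi_[x0,x1]
print([round(v, 3) for v in vals])
print(round((x1 - x0) * (a + b) / 2, 3))            # the predicted limit of the pairing
```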

2. Non-linear distortions

Now, the norm-limit behaves well under non-linear distortions of the functions. Let's consider a sequence {(f_n)} which converges in norm to some {f}, that is, {\|f_n -f\|_\infty \rightarrow 0}. Since this means that {\sup_x| f_n(x) - f(x)| \rightarrow 0}, we see that for any bounded continuous function {\phi:{\mathbb R}\rightarrow {\mathbb R}} we also have {\sup_x |\phi(f_n(x)) - \phi(f(x))|\rightarrow 0} and hence {\phi\circ f_n \rightarrow \phi\circ f}.

The same is totally untrue for weak-* (and also weak) limits:

Example 2 Consider the same sequence {(f_n)} as in Example 1, which has the weak-* limit {f\equiv\frac{a+b}{2}}. As a nonlinear distortion we take {\phi(s) = s^2}, which gives

\displaystyle \phi\circ f_n(x) = \begin{cases} a^2 & \text{for }\ \tfrac{2k}{n} \leq x < \tfrac{2k+1}{n},\ k\in{\mathbb Z}\\ b^2 & \text{else.} \end{cases}

Now we see

\displaystyle \phi\circ f_n \rightharpoonup^* \frac{a^2 + b^2}{2} \neq \Bigl(\frac{a+b}{2}\Bigr)^2 = \phi\circ f.

The example can be made a little more drastic by assuming {b = -a}, which gives {f_n\rightharpoonup^* f\equiv 0}. Then, for every {\phi} with {\phi(0) = 0} we have {\phi\circ f\equiv 0}. However, with such a {\phi} we may produce any constant value {c} as the weak-* limit of {\phi\circ f_n} (take, e.g. {\phi(b) = 0}, {\phi(a) = 2c}).
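The failure under the distortion {\phi(s)=s^2} can also be seen numerically, testing against {g\equiv 1}, i.e. taking means over {[0,1]}:

```python
import numpy as np

a, b = 1.0, -1.0                        # b = -a, so f_n converges weakly-* to 0
xs = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
fn = np.where(np.floor(1000 * xs) % 2 == 0, a, b)
phi = lambda s: s ** 2                  # the nonlinear distortion

print(round(fn.mean(), 2))              # about 0: the weak-* limit of f_n
print(round(phi(fn).mean(), 2))         # 1.0 = (a^2 + b^2)/2, not phi(0) = 0
```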

In fact, the relation {\phi\circ f_n \rightharpoonup^* \phi\circ f} is only true for affine linear distortions {\phi} (unfortunately, I forgot a reference for this fact…).

This raises the question of whether it is possible to describe the weak-* limits of distortions of functions, and in fact this will be possible with the notion of Young measures.

3. Young measures

In my understanding, Young measures are a way to view a function a little bit more geometrically, putting more emphasis on the graph of the function rather than on its mapping properties.

We start with defining Young measures and illustrate how they can be used to describe weak-* limits. In what follows we use {\mathfrak{L}} for the Lebesgue measure on the (open and bounded) set {\Omega}. A more thorough description in the spirit of this section is Variational analysis in Sobolev and BV spaces by Attouch, Buttazzo and Michaille.

Definition 1 (Young measure) A positive measure {\mu} on {\Omega\times {\mathbb R}} is called a Young measure if for every Borel subset {B} of {\Omega} it holds that

\displaystyle \mu(B\times{\mathbb R}) = \mathfrak{L}(B).

Hence, a Young measure is a measure for which the measure of every box {B\times{\mathbb R}} is determined by the projection of the box onto the set {\Omega}, which is, of course, {B}:

There are special Young measures, namely those which are associated to functions. Roughly speaking, the Young measure associated to a function {u:\Omega\rightarrow {\mathbb R}} is a measure which is equidistributed on the graph of {u}.

Definition 2 (Young measure associated to {u}) For a Borel measurable function {u:\Omega\rightarrow{\mathbb R}} we define the associated Young measure {\mu^u} by defining for every continuous and bounded function {\phi:\Omega\times{\mathbb R}\rightarrow{\mathbb R}}

\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega \phi(x,u(x)){\mathrm d} \mathfrak{L}(x).

It is clear that {\mu^u} is a Young measure: Take {B\subset \Omega} and approximate the characteristic function {\chi_{B\times{\mathbb R}}} by smooth functions {\phi_n}. Then

\displaystyle \int_{\Omega\times{\mathbb R}}\phi_n(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega \phi_n(x,u(x)){\mathrm d} \mathfrak{L}(x).

The left hand side converges to {\mu^u(B\times{\mathbb R})} while the right hand side converges to {\int_B 1{\mathrm d}{\mathfrak{L}} = \mathfrak{L}(B)} as claimed.

The intuition that a Young measure associated to a function is an equidistributed measure on the graph can be made more precise by “slicing” it:

Definition 3 (Slicing a measure) Let {\mu} be a positive measure on {\Omega\times{\mathbb R}} and let {\sigma} be its projection onto {\Omega} (i.e. {\sigma(B) = \mu(B\times{\mathbb R})}). Then {\mu} can be sliced into measures {(\mu_x)_{x\in\Omega}}, i.e. it holds:

  1. Each {\mu_x} is a probability measure.
  2. The mapping {x\mapsto \int_{\mathbb R} \phi(x,y){\mathrm d}{\mu_x(y)}} is measurable for every continuous {\phi} and it holds that

    \displaystyle \int_{\Omega\times{\mathbb R}} \phi(x,y){\mathrm d}{\mu(x,y)} = \int_\Omega\int_{\mathbb R} \phi(x,y){\mathrm d}{\mu_x(y)}{\mathrm d}{\sigma(x)}.

The existence of the slices is, e.g. proven in Variational analysis in Sobolev and BV spaces, Theorem 4.2.4.

For the Young measure {\mu^u} associated to {u}, the measure {\sigma} in Definition 3 is {\mathfrak{L}} and hence:

\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega\int_{\mathbb R} \phi(x,y){\mathrm d}{\mu^u_x(y)}{\mathrm d}{\mathfrak{L}(x)}.

On the other hand:

\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega\phi(x,u(x)){\mathrm d}{\mathfrak{L}} = \int_\Omega\int_{\mathbb R} \phi(x,y) {\mathrm d}{\delta_{u(x)}(y)}{\mathrm d}{\mathfrak{L}(x)}

and we see that {\mu^u} slices into

\displaystyle \mu^u_x = \delta_{u(x)}

and this can be vaguely sketched:

4. Narrow convergence of Young measures and weak* convergence in {L^\infty(\Omega)}

Now we ask ourselves: If a sequence {(u_n)} converges weakly-* in {L^\infty(\Omega)}, what does the sequence of associated Young measures do? Obviously, we need a notion of convergence for Young measures. The usual notion here is that of narrow convergence:

Definition 4 (Narrow convergence of Young measures) A sequence {(\mu_n)} of Young measures on {\Omega\times{\mathbb R}} converges narrowly to {\mu}, if for all bounded and continuous functions {\phi} it holds that

\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu_n(x,y)} \rightarrow \int_{\Omega\times{\mathbb R}} \phi(x,y){\mathrm d}{\mu(x,y)}.

Narrow convergence will also be denoted by {\mu_n\rightharpoonup\mu}.

One may also use non-continuous test functions of the form {\phi(x,y) = \chi_B(x)\psi(y)} with a Borel set {B\subset\Omega} and a continuous and bounded {\psi}, leading to the same notion.

The set of Young measures is closed under narrow convergence, since we may test with the function {\phi(x,y) = \chi_B(x)\chi_{\mathbb R}(y)} to obtain:

\displaystyle \mathfrak{L}(B) = \lim_{n\rightarrow\infty} \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu_n(x,y)} = \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu(x,y)} = \mu(B\times {\mathbb R}).

The next observation is the following:

Proposition 5 Let {(u_n)} be a bounded sequence in {L^\infty(\Omega)}. Then the sequence {(\mu^{u_n})} of associated Young measures has a subsequence which converges narrowly to a Young measure {\mu}.

The proof uses the notion of tightness of sets of measures and the Prokhorov compactness theorem for Young measures (Theorem 4.3.2 in Variational analysis in Sobolev and BV spaces).

Example 3 (Convergence of the Young measures associated to Example 1) Consider the functions {f_n} from Example 1 and the associated Young measures {\mu^{f_n}}. To figure out the narrow limit of these Young measures, we test with a function {\phi(x,y) = \chi_B(x)\psi(y)} with a Borel set {B} and a bounded and continuous function {\psi}. We calculate

\displaystyle \begin{array}{rcl} \int_{[0,1]\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^{f_n}(x,y)} &= &\int_0^1\phi(x,f_n(x)){\mathrm d}{\mathfrak{L}(x)}\\ & = &\int_B\psi(f_n(x)){\mathrm d}{\mathfrak{L}(x)}\\ & \rightarrow &\mathfrak{L}(B)\frac{\psi(a)+\psi(b)}{2}\\ & = & \int_B\frac{\psi(a)+\psi(b)}{2}{\mathrm d}{\mathfrak{L}(x)}\\ & = & \int_{[0,1]}\int_{\mathbb R}\phi(x,y){\mathrm d}{\bigl(\tfrac{1}{2}(\delta_a+\delta_b)\bigr)(y)}{\mathrm d}{\mathfrak{L}(x)}. \end{array}

We conclude:

\displaystyle \mu^{f_n} \rightharpoonup \tfrac{1}{2}(\delta_a+\delta_b)\otimes\mathfrak{L}

i.e. the narrow limit of the Young measures {\mu^{f_n}} is not the constant function {(a+b)/2} but the measure {\mu = \tfrac{1}{2}(\delta_a+\delta_b)\otimes\mathfrak{L}}. This expression may be easier to digest in sliced form:

\displaystyle \mu_x = \tfrac{1}{2}(\delta_a+\delta_b)

i.e. the narrow limit is something like the “probability distribution” of the values of the functions {f_n}. This can be roughly put in a picture:
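Numerically, the slice {\tfrac12(\delta_a+\delta_b)} is simply the limiting value distribution of the {f_n}; a quick sketch with arbitrary values {a,b}:

```python
import numpy as np

a, b = 2.0, -1.0
xs = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
fn = np.where(np.floor(500 * xs) % 2 == 0, a, b)

# the slice nu_x of the narrow limit is the limiting distribution of values of f_n
print(round(np.mean(fn == a), 2), round(np.mean(fn == b), 2))   # about 0.5 each

# testing with a bounded continuous psi matches integration against the slice
psi = np.tanh
print(round(np.mean(psi(fn)), 2), round((psi(a) + psi(b)) / 2, 2))
```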

Obviously, this notion of convergence goes well with nonlinear distortions:

\displaystyle \mu^{\phi\circ f_n} \rightharpoonup \tfrac{1}{2}(\delta_{\phi(a)} + \delta_{\phi(b)})\otimes\mathfrak{L}.

Recall from Example 2: The weak-* limit of {\phi\circ f_n} was the constant function {\tfrac{\phi(a)+\phi(b)}{2}}, i.e.

\displaystyle \phi\circ f_n \rightharpoonup^* \tfrac{\phi(a)+\phi(b)}{2}\chi_{[0,1]}.

The observation from the previous example holds in a similar way for general weakly-* converging sequences {f_n}:

Theorem 6 Let {f_n\rightharpoonup^* f} in {L^\infty(\Omega)} with {\mu^{f_n}\rightharpoonup\mu}. Then it holds for almost all {x} that

\displaystyle f(x) = \int_{\mathbb R} y{\mathrm d}{\mu_x(y)}.

In other words: {f(x)} is the expectation of the probability measure {\mu_x}.
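Sticking with the hypothetical oscillating sequence from above, this relation can be checked numerically: the weak-* limit of {f_n} is the constant {(a+b)/2}, which is exactly the expectation of the slice measure {\mu_x = \tfrac{1}{2}(\delta_a+\delta_b)}. A sketch:

```python
import numpy as np

# Hypothetical sequence as in Example 1: f_n jumps between a and b on
# subintervals of width 1/(2n); its weak-* limit is the constant
# (a + b)/2, the expectation of mu_x = (1/2)(delta_a + delta_b).
a, b = -1.0, 2.0

def f_n(x, n):
    return np.where(np.floor(2 * n * x).astype(int) % 2 == 0, a, b)

g = lambda x: x ** 2  # an integrable test function
x = np.linspace(0, 1, 200001)[:-1]
lhs = np.mean(f_n(x, 1000) * g(x))   # approximates int f_n g dL
rhs = (a + b) / 2 * np.mean(g(x))    # expectation of mu_x times int g dL
print(lhs, rhs)  # nearly equal
```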

Some time ago I picked up the phrase Ivanov regularization. Starting with an operator A:X\to Y between two Banach spaces (say), one encounters the problem of instability of the solution of Ax=y if A has non-closed range. One dominant tool to regularize the solution is called Tikhonov regularization and consists of minimizing the functional \|Ax - y^\delta\|_Y^p + \alpha \|x\|_X^q. The meaning behind these terms is as follows: The term \|Ax -y^\delta \|_Y^p is often called discrepancy and should not be too large, to guarantee that the “solution” somehow explains the data. The term \|x\|_X^q is often called regularization functional and shall not be too large to have some meaningful notion of “solution”. The parameter \alpha>0 is called regularization parameter and allows weighting between the discrepancy and the regularization.

In the Hilbert space case one typically chooses p=q=2 and obtains a functional whose minimizer is given more or less explicitly as

x_\alpha = (A^*A + \alpha I)^{-1} A^* y^\delta.
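In finite dimensions this closed form is just a linear solve. A minimal sketch with a made-up matrix A and made-up data y^\delta:

```python
import numpy as np

# Minimal sketch of the closed-form Tikhonov solution for p = q = 2;
# the operator A and the data y_delta are made up for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
y_delta = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(20)

alpha = 0.1
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y_delta)

# sanity check: the gradient of ||Ax - y_delta||^2 + alpha ||x||^2
# vanishes at x_alpha
grad = 2 * A.T @ (A @ x_alpha - y_delta) + 2 * alpha * x_alpha
print(np.linalg.norm(grad))  # numerically zero
```

For small problems forming the normal equations like this is fine; for larger ones one would rather use a factorization of A or an iterative solver.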

The existence of this explicit solution seems to be one of the main reasons for the broad usage of Tikhonov regularization in the Hilbert space setting.

Another related approach is sometimes called residual method, however, I would prefer the term Morozov regularization. Here one again balances the terms “discrepancy” and “regularization” but in a different way: One solves

\min \|x\|_X\ \text{s.t.}\ \|Ax-y^\delta\|_Y\leq \delta.

That is, one tries to find an x with minimal norm which explains the data y^\delta up to an accuracy \delta. The idea is that \delta reflects the so-called noise level, i.e. an estimate of the error made during the measurement of y. One advantage of Morozov regularization over Tikhonov regularization is that the meaning of the parameter \delta>0 is much clearer than the meaning of \alpha>0. However, there is no closed form solution for Morozov regularization.
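In the linear Hilbert space setting one can nonetheless approach Morozov regularization through the Tikhonov family: under standard assumptions (in particular, the constraint being active), the minimal-norm solution is the Tikhonov minimizer x_\alpha whose discrepancy equals \delta, and \alpha can be found by a simple search. A sketch with made-up data, using bisection on a log scale:

```python
import numpy as np

# Morozov regularization via the Tikhonov family (linear, Hilbert
# case): search for the alpha whose Tikhonov minimizer has
# discrepancy exactly delta. A, x_true and the noise are made up.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))
x_true = rng.standard_normal(15)
noise = rng.standard_normal(30)
delta = 0.05
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha):
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y_delta)

def discrepancy(alpha):
    return np.linalg.norm(A @ tikhonov(alpha) - y_delta)

lo, hi = 1e-12, 1e6  # discrepancy(alpha) is increasing in alpha
for _ in range(100):
    mid = np.sqrt(lo * hi)  # bisection on a log scale
    if discrepancy(mid) < delta:
        lo = mid
    else:
        hi = mid
x_morozov = tikhonov(lo)
print(discrepancy(lo))  # approximately delta
```

This is of course just the discrepancy principle for choosing \alpha; it does not carry over directly to general functionals S and R.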

Ivanov regularization is yet another method: solve

\min \|Ax-y^\delta\|_Y\ \text{s.t.}\ \|x\|_X \leq \tau.

Here one could say that one wants the smallest discrepancy among all x which are not too “rough”.

Ivanov regularization in this form does not have too many appealing properties: The parameter \tau>0 does not seem to have a proper motivation and moreover, there is again no closed form solution.

However, recently the focus of variational regularization (as all these methods may be called) has shifted from using norms to more general functionals. For example, one considers Tikhonov regularization in an abstract form as minimizing

S(Ax,y^\delta) + \alpha R(x)

with a “general” similarity measure S and a general regularization term R; see e.g. the dissertation of Christiane Pöschl (which can be found here, thanks Christiane) or the works of Jens Flemming. Prominent examples of similarity measures are of course norms of differences, but also the Kullback-Leibler divergence and the Itakura-Saito divergence, which are both treated in this paper. For the regularization term one uses norms and semi-norms in various spaces, e.g. Sobolev (semi-)norms, Besov (semi-)norms, the total variation seminorm or \ell^p norms.

In all these cases, the advantage of Tikhonov regularization of having a closed form solution is gone. Then the most natural choice would be, in my opinion, Morozov regularization, because one may use the noise level directly as a parameter. However, from a practical point of view one should also care about the problem of calculating the minimizer of the respective problems. Here, I think that Ivanov regularization is important again: Often the similarity measure S is somehow smooth but the regularization term R is nonsmooth (e.g. for total variation regularization or sparse regularization with \ell^p-penalty). Hence, both Tikhonov and Morozov regularization have a nonsmooth objective function. Somehow, Tikhonov regularization is still a bit easier, since the minimization is unconstrained. Morozov regularization has a constraint which is usually quite difficult to handle, e.g. it is usually difficult (is it probably even ill-posed?) to project onto the set defined by S(Ax,y^\delta)\leq \delta. Ivanov regularization has a smooth objective functional (at least if the similarity measure is smooth) and a constraint which is usually somehow simple (i.e. projections are not too difficult to obtain).
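This last point can be sketched concretely: with a smooth discrepancy and a simple constraint set, projected gradient descent applies directly to the Ivanov problem. A minimal example with made-up data, using the \ell^2 ball \|x\|\leq\tau as constraint (its projection is a mere rescaling):

```python
import numpy as np

# Projected gradient descent for the Ivanov problem
#   min (1/2)||Ax - y_delta||^2  s.t.  ||x|| <= tau
# with an l2 ball constraint; A and y_delta are made up.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 15))
y_delta = rng.standard_normal(30)
tau = 1.0

def project_ball(z, tau):
    # projection onto {||z|| <= tau}: rescale if outside
    nz = np.linalg.norm(z)
    return z if nz <= tau else tau * z / nz

step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
x = np.zeros(15)
for _ in range(5000):
    x = project_ball(x - step * A.T @ (A @ x - y_delta), tau)
print(np.linalg.norm(x))  # feasible: at most tau
```

For sparse regularization the \ell^1 ball admits a similarly cheap projection (via sorting), which is one reason why this constrained formulation is attractive in practice.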

Now I found that all three methods, Tikhonov, Morozov and Ivanov regularization, are treated in the book “Theory of linear ill-posed problems and its applications” by V. K. Ivanov, V. V. Vasin and Vitaliĭ Pavlovich Tanana in Sections 3.2, 3.3 and 3.4: Ivanov regularization goes under the name “method of quasi solutions” (Section 3.2) and Morozov regularization is called “method of the residual” (Section 3.4). Well, I think I should read these sections a bit closer now…

Will this be the start of a mathematical blog of my own? Maybe, but maybe not.

The reason for the start of this blog was basically the observation that other people use a blog in a way which helps them to do, and especially to organize, their research. While searching things on the web I occasionally stumble upon things I consider interesting, and my usual procedure was to do one of the following:

  • Scribble down a note on a piece of paper which I put with the other small notes on my desk.
  • Download some document and store it in some place on my computer (in fact I have a folder which I called “Archivieren”, which means: “To be archived”.)
  • Do nothing special but just try to remember the place where I found the information.

None of these ways seemed to be working well, and I have the feeling that in all three cases it happened frequently that I did not use the information as well as I could have. I am going to try to use this blog as another option to keep track of things I find and think about. Let’s see how this will evolve…