September 2011
September 26, 2011
This last post on uncertainty principles will probably be the hardest one for me. As said in my first post, I supervised a Master’s thesis and posed the very vague question
“Why are the uncertainty principles for the windowed Fourier transform and the wavelet transform so different?”
I had different things in mind:
- The windowed Fourier transform can be generalized to arbitrary dimensions easily. In particular, the underlying Weyl-Heisenberg group can be generalized to arbitrary dimensions. Interestingly, the uncertainty principle carries over almost exactly: For the windowed Fourier transform in dimensions, the uncertainty principle reads as
and again, this inequality is sharp for the multivariate Gaussians. A generalization of the wavelet transform is by no means canonical. The spirit in one dimension was to use translation and scaling. However, in higher dimensions there are a lot more geometric transformations you can apply: rotations, anisotropic scalings and shearings. Here one has to identify a suitable group of actions and try to carry all things over. The most naive way, which uses isotropic scaling and rotation, does lead to uncertainty relations, but no function will make these inequalities sharp…
- The lower bound in the Heisenberg uncertainty principle is fixed (for normed ). However, the lower bound in the affine uncertainty (equation (1) in my previous post) is not fixed (for normed ); indeed, it can be arbitrarily small. Hence, a function which makes the inequality sharp may not lead to the minimum product of the corresponding operator variances. For other wavelet-like transformations (i.e. those which include some kind of scaling) the same is true.
- The Heisenberg uncertainty principle has a clear and crisp interpretation involving the product of the variances for a function and its Fourier transform. There is no such thing available for the affine uncertainty principle. (In fact, this question was not addressed in the thesis but in the paper “Do uncertainty minimizers attain minimal uncertainty” and the Diploma thesis by Bastian Kanning).
The outcome was the (German) thesis Unschärferelationen für unitäre Gruppendarstellungen (Uncertainty relations for unitary group representations) by Melanie Rehders. As the question is so vague, there could not be one simple answer, but as a result of the thesis, one could say in a nutshell:
“The uncertainty principles are so different because the groups underlying wavelet-like transforms are semidirect products of a matrix group and a translation group and hence, the identity can not appear as an infinitesimal generator and hence can not be a commutator.”
In this post I’ll face the challenge to give some meaning to this sentence.
1. The abstract structure behind
Let me introduce the players in a diagram which I redrew from the thesis:
As you see, we need several algebraic structures (as well as analytical ones).
2. From group representations to integral transforms
First, we need a locally compact group , and naturally, this comes with a left invariant measure , which is called the Haar measure. With this tool we can integrate complex-valued functions defined on the group, and we may also form the spaces .
Having the space , we can define a special representation of the group (remember that a group representation is a description of the group in terms of linear transformations of a vector space, in other words, a group homomorphism from to the space of linear mappings on some vector space ). The special representation we use is the so-called left regular representation into the space of unitary operators on the space (denoted by ). This representation is the mapping defined by
One easily checks that this is a homomorphism, and the unitarity follows from the left invariance of the Haar measure. One could say that the group acts on the functions in in a unitary way. We now may define an integral transform as follows: For define
Comparing with the previous two posts, you see that this gives precisely the windowed Fourier transform (for the Weyl-Heisenberg group) and the wavelet transform (for the affine group).
To have convenient properties for the integral transform one needs some more conditions:
- Irreducibility, i.e. that the only subspaces of which are invariant under every are and .
- Square integrability, i.e. that there exists a non-zero function such that
these functions are called admissible.
We have the following theorem (Grossmann, Morlet, Paul): Let be a unitary, irreducible, and square integrable representation of a locally compact group on and let be admissible. Then it holds that the mapping defined in (1) is a multiple of an isometry. In particular, has a left-inverse which is (up to a constant) given by its adjoint.
This somehow clarifies the arrow from “group representation” to “integral transform”.
3. From group representations to Lie algebra representations
For a closed linear group , i.e. a closed subgroup of , one has the associated Lie algebra defined with the help of the matrix exponential by
The corresponding Lie-bracket is the commutator:
If we now have a representation of our group on some Hilbert space (you may think of but here we may have any Hilbert space), we may ask if there is an associated representation of the Lie algebra . Indeed there is one, which is called the derived representation. To formulate this representation we need the following subspace of :
Theorem 1 Let be a representation of a closed linear group in a Hilbert space . The mapping defined by
is a representation of the Lie algebra on the space .
This clarifies the arrow from “group representations” to “Lie algebra representations”.
4. Lie-algebra representations and uncertainty relations
We are now ready to illustrate the abstract path from Lie algebra representations to uncertainty relations. This path uses the so-called infinitesimal generators:
Definition 2 Let be a closed linear group with Lie algebra and let be a basis of . Let be a representation of on a complex Hilbert space and let the derived representation be injective. Then, the operators are called the infinitesimal generators of with respect to the representation .
These infinitesimal generators are always self-adjoint. Hence, we may apply Robertson’s uncertainty principle to every two infinitesimal generators for which the commutator does not vanish.
The abstract way described in Sections 2, 3 and 4 is precisely how we derived the Heisenberg uncertainty principle and the affine uncertainty principle in the two previous posts. But now the question remains: Why are they so different?
The so-called commutator tables of the Lie-algebras shed some light on this:
Example 1 (The Heisenberg algebra) The associated Lie algebra to the Weyl-Heisenberg group is the real vector space with the Lie bracket
A basis of this Lie algebra is , , and the three commutators are
Two facts are important: There is an element which commutes with every other element; in other words, the center of the algebra is one-dimensional and spanned by one of the basis elements. If we remember the three infinitesimal generators , and for the windowed Fourier transform, we observe that they obey the same commutator relations (which is not a surprise…).
Example 2 (The “affine Lie algebra”) The Lie algebra of the affine group (with composition ) is with Lie bracket
A basis of the Lie algebra is , and the commutator is
Here, there is no element which commutes with everything, i.e. the center of the Lie algebra is trivial. Of course, the commutator relation resembles the one for the infinitesimal generators and for the wavelet transform.
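These two commutator tables can be checked with concrete matrix representatives. The following sketch (the particular basis choices are mine, for illustration) uses the standard realization of the Heisenberg algebra as strictly upper triangular 3×3 matrices, and a realization of the affine Lie algebra as 2×2 matrices with zero bottom row:

```python
import numpy as np

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Heisenberg algebra: strictly upper triangular 3x3 matrices
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

assert np.allclose(comm(X, Y), Z)                 # [X, Y] = Z
assert np.allclose(comm(X, Z), np.zeros((3, 3)))  # Z commutes with everything:
assert np.allclose(comm(Y, Z), np.zeros((3, 3)))  # Z spans the center

# Affine Lie algebra: 2x2 matrices [[a, b], [0, 0]]
A = np.array([[1, 0], [0, 0]], dtype=float)
B = np.array([[0, 1], [0, 0]], dtype=float)

assert np.allclose(comm(A, B), B)                 # [A, B] = B: trivial center
```

The central element of the Heisenberg algebra is exactly what later shows up as the multiple of the identity in the commutator relation of the infinitesimal generators; the affine table has no such element.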
5. Higher dimensional wavelets
Wavelets in higher dimensions are a bit tricky. If one thinks of groups acting on which consist of translations and something like dilation, one observes that one basically deals with semidirect products of a subgroup of and : For and one may transform a function as
Indeed this is the so-called quasiregular representation of the semidirect product of and . Two important examples of 2-dimensional wavelet-like transformations are:
Example 3 The “standard” 2-dimensional wavelet transform. One takes the group
which is a combination of rotation and isotropic scaling. Another parametrization is:
where is the scaling factor and is the rotation angle.
Example 4 The shearlet transform is based on the group
which consists of anisotropic scaling by and “shear” by .
Doing some more algebra, one observes that the center of the associated Lie algebra of a semidirect product of the form (2) is always trivial and hence, the identity never appears as a commutator. This neat observation shows that no wavelet-like transformation which is based on a group structure can ever have an uncertainty relation which behaves like
the one in the Heisenberg case.
Although this may not be a groundbreaking discovery, this observation and the whole underlying algebra somehow cleared my view on this issue.
September 20, 2011
1. The affine group behind the wavelet transform
Continuing my previous post on the uncertainty principle for the windowed Fourier transform, we now come to another integral transform: The wavelet transform.
In contrast to the windowed Fourier transform (which analyzes a function with respect to position and frequency) the wavelet transform analyzes a function with respect to position and scale. For a given analyzing function and a signal , the wavelet transform is (for , ):
In the same way as the windowed Fourier transform could be written as inner products of with a translated and modulated window function, the wavelet transform can be written as inner products of with translated and scaled functions . And again, these modifications which happen to the analyzing function come from a group.
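As a small numerical illustration (not from the post: the Mexican-hat wavelet and the helper name are my own choices), one can approximate a single wavelet coefficient by a Riemann sum and watch the modulus peak when the translated and scaled wavelet sits on a feature of the signal:

```python
import numpy as np

def cwt_point(f, psi, a, b, t):
    """W f(a, b) = |a|^{-1/2} * integral of f(t) * conj(psi((t - b)/a)) dt,
    approximated by a Riemann sum on the uniform grid t."""
    dt = t[1] - t[0]
    atoms = np.conj(psi((t - b) / a)) / np.sqrt(abs(a))
    return np.sum(f(t) * atoms) * dt

psi = lambda t: (1 - t**2) * np.exp(-t**2 / 2)   # Mexican-hat wavelet
f = lambda t: np.exp(-(t - 3)**2)                # a bump centered at t = 3

t = np.linspace(-20.0, 20.0, 4001)
on = abs(cwt_point(f, psi, 1.0, 3.0, t))    # wavelet sits on the bump
off = abs(cwt_point(f, psi, 1.0, 12.0, t))  # wavelet sits far away
assert on > off
```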
Definition 1 The affine group is the set endowed with the operation
Indeed this is a group (with identity and inverse ). The name affine group stems from the fact that the group operation behaves like the composition of one-dimensional affine linear functions: For and we have .
The affine group admits a representation on the space of unitary operators on :
(note the normalizing factor ).
2. The affine uncertainty principle
I am not sure who is to be credited for the group theoretical background behind wavelets; however, the two-part paper “Transforms associated to square integrable group representations” by Grossmann, Morlet and Paul has been influential (and can be found, e.g., in the compilation “Fundamental papers in wavelet theory” by Heil and Walnut).
As done by Stephan Dahlke and Peter Maass in “The Affine Uncertainty Principle in One and Two Dimensions”, one can proceed in analogy to the windowed Fourier transform and the corresponding Weyl-Heisenberg group and compute the infinitesimal operators: Take the derivative of the representation with respect to the group parameters and evaluate at the identity:
and
Again, these operators are skew-adjoint and hence, multiplying by gives self-adjoint operators.
These operators and do not commute and hence, applying Robertson’s uncertainty principle gives an inequality. The commutator of and is
Robertson’s uncertainty principle reads as
and with some manipulation this turns into (for )
Again, one can derive the functions for which equality is attained and these are the functions of the form
for real . (By the way, these functions are indeed wavelets and sometimes called Cauchy-wavelets because of their analogy with the Cauchy kernel from complex analysis.)
By the way: These functions are necessarily complex valued. If one restricts oneself to real valued functions there is a simpler inequality, which one may call the “real valued affine uncertainty”. First, observe that for real valued , and hence, the left hand side in (1) is zero (which makes the inequality a bit pointless). Using that for real valued we have , and that together with for we obtain (with ) from (1)
Since equality is only attained for the Cauchy wavelets (which are not real valued) we can state:
Corollary 2 (Real valued affine uncertainty) For any real valued function which is in the domain of it holds that
As a somewhat strange curiosity, one can derive this “real valued affine uncertainty principle” by formal integration by parts and the Cauchy-Schwarz inequality, totally similar to the Heisenberg uncertainty principle (as I’ve done in my previous post):
Dividing by gives the “real valued affine uncertainty” (but only in the non-strict way).
September 16, 2011
Some years ago I became fascinated by uncertainty principles. I got to know them via signal processing and not via physics, although, from a mathematical point of view they are the same.
I recently supervised a Master’s thesis on this topic and the results clarified a few things for me which I used to find obscure, and I’d like to illustrate this here on my blog. However, it takes some space to introduce notation and to explain what it’s all about and hence, I decided to write a short series of posts in which I try to explain what new insights I got from the thesis. Here comes the first post:
1. The Fourier transform and the windowed Fourier transform
Let’s start with an important tool from signal processing you all know: The Fourier transform. For the Fourier transform is
(I was tempted to say “whenever the integral is defined”. However, the details here are a little bit more involved, but I will not go into detail here; is defined for -functions, for functions and even for tempered distributions…) Roughly speaking, the Fourier transform decomposes a signal into its frequency components, which can be seen from the Fourier inversion formula:
i.e. the (complex) number says “how much the frequency (i.e. the function ) contributes to ”. In the context of signal processing one often speaks of the “time representation” and the “frequency representation” .
One drawback of the Fourier transform, when used to analyze signals, is its “global” nature in that the value depends on every value of , i.e. a change of in a small interval results in a change of all of . A natural idea (which is usually attributed to Gabor) is to introduce a window function , which is supposed to be a bump function centered at zero, then translate this function and “localize” by multiplying it with . The resulting transform
is called the windowed Fourier transform, the short-time Fourier transform or (in the case of ) the Gabor transform.
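A direct discretization of this transform shows the behavior (a sketch under my own naming, and with the convention that the exponential is e^{-iωt}; the placement of the 2π differs between texts): for a pure oscillation, the modulus of the windowed Fourier transform peaks at the signal’s frequency.

```python
import numpy as np

def wft_point(f, g, b, omega, t):
    """Windowed Fourier coefficient: integral of f(t) g(t - b) e^{-i omega t} dt,
    approximated by a Riemann sum on the uniform grid t."""
    dt = t[1] - t[0]
    return np.sum(f(t) * g(t - b) * np.exp(-1j * omega * t)) * dt

g = lambda t: np.exp(-t**2)     # Gaussian window -> Gabor transform
f = lambda t: np.cos(5 * t)     # pure oscillation with frequency 5

t = np.linspace(-30.0, 30.0, 12001)
at_freq = abs(wft_point(f, g, 0.0, 5.0, t))   # modulus at the true frequency
off_freq = abs(wft_point(f, g, 0.0, 1.0, t))  # modulus at a wrong frequency
assert at_freq > off_freq
```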
Of course we can write the windowed Fourier transform in terms of the usual Fourier transform as
In other words: The localization in time is precisely determined by the “locality” of , that is, how well is concentrated around zero. The better is concentrated around , the more “localized around ” is the information of which the windowed Fourier transform uses.
For the localization in frequency one obtains (by Plancherel’s formula and integral substitution) that
In other words: The localization in frequency is precisely determined by the “locality” of , that is, how well is concentrated around zero. The better is concentrated around , the more “localized around ” is the information of which the windowed Fourier transform uses.
Hence, it seems clear that a function is well suited as a window function if it is well localized both in time and frequency. If one measures the localization of a function around zero by its variance
,
then there is the fundamental lower bound on the product of the variance of a function and the variance of its Fourier transform, known under the name “Heisenberg uncertainty principle” (or, as I learned from Wikipedia, “Gabor limit”): For it holds that
Proof: A simple (not totally rigorous) proof goes like this: We use integration by parts, the Cauchy-Schwarz inequality and the Plancherel formula:
Moreover, the inequality is sharp for the functions for . In this sense, these Gaussians are best suited for the windowed Fourier transform.
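This sharpness can be observed numerically. The following sketch is my own discretization; with the convention that the Fourier transform carries the factor e^{-iωx} and the variance defined as above (normalized by the squared norm), the lower bound on the product of the two variances is 1/4, attained by the Gaussian:

```python
import numpy as np

def variance(u, grid):
    """Var(u) = sum of s^2 |u(s)|^2 over sum of |u(s)|^2 (step size cancels)."""
    p = np.abs(u)**2
    return np.sum(grid**2 * p) / np.sum(p)

N, L = 4096, 20.0
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular frequency grid

def var_product(f):
    fhat = np.fft.fft(f)                      # |fhat| matches the continuous FT up to scale
    return variance(f, x) * variance(fhat, omega)

p_gauss = var_product(np.exp(-x**2 / 2))      # Gaussian attains the bound 1/4
p_laplace = var_product(np.exp(-np.abs(x)))   # any other function does worse

assert abs(p_gauss - 0.25) < 1e-2
assert p_laplace > p_gauss
```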
While this presentation was geared towards usability, there is a quite different approach to uncertainty principles related to integral transforms which uses the underlying group structure.
2. The group behind the windowed Fourier transform
The windowed Fourier transform (1) can also be seen as taking inner products of with the family of functions . This family is obtained from the single function by letting the so-called Weyl-Heisenberg group act on it:
Definition 1 The Weyl-Heisenberg group is the set endowed with the operation
The Weyl-Heisenberg group admits a representation into the space of unitary operators on , that is, a map
Indeed, the operators are unitary and it holds that
Moreover, the mapping is continuous for all , a property of the representation which is called strong continuity.
In this light, the windowed Fourier transform can be written as
Now there is a motivation for the uncertainty principle as follows: Associated to the Weyl-Heisenberg group there is the Weyl-Heisenberg algebra, a basis of which is given by the so-called infinitesimal generators. These are, roughly speaking, the derivatives of the representation with respect to the group parameters, evaluated at the identity. In the Weyl-Heisenberg case:
and
and
(In the last case, my notation was not too good: Note that with and the derivative has to be taken with respect to .)
All these operators are skew adjoint on and hence the operators
are self adjoint.
For any two (possibly unbounded) operators on a Hilbert space there is a kind of abstract uncertainty principle (apparently sometimes known as Robertson’s uncertainty principle). It uses the commutator of the two operators:
Theorem 2 For any two self-adjoint operators and on a Hilbert space , any in the domain of definition of and any real numbers and , it holds that
Proof: The proof simply consists of noting that
Now use Cauchy-Schwarz to obtain the result.
Looking closer at the inequalities in the proof, one infers in which cases Robertson’s uncertainty principle is sharp: Precisely if there is a real such that
Now the three self-adjoint operators , and have three commutators, but is a multiple of the identity and hence commutes with the others, i.e. there is only one commutator relation:
Hence, using , and in Robertson’s uncertainty principle gives (in sloppy notation)
which is exactly the Heisenberg uncertainty principle.
Moreover, by (2), equality happens if fulfills the differential equation
the solutions of which are exactly the functions .
Since the Heisenberg uncertainty principle is such an impressive thing with a broad range of implications (a colleague said that its interpretation, consequences and motivation somehow form a kind of “holy grail” in some communities), one may try to find other cool uncertainty principles by generalizing the approach of the previous sections to other transformations.
In the next post I am going to write about other group related integral transforms and their “uncertainty principles”.
September 14, 2011
A few days ago I was at IFIP TC7 and participated in the minisymposium “Optimization in Banach spaces with sparsity constraints” organized by Kristian Bredies and Christian Clason. My session was very interesting, consisting of talks by Masha Zhariy (Optimization algorithms for sparse reconstruction in inverse problems with adaptive stopping rules), Caroline Verhoeven (On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty) and Kristian (Inverse problems in spaces of measures). Unfortunately, I could not take part in the second half of the minisymposium.
Although the minisymposium was interesting, I’d like to talk about the plenary talk “Signal and systems representation; signal spaces, sampling, and quantization effects” by Holger Boche. The title sounded as if I should be able to follow, but I had never heard the name before. And indeed, Holger Boche gave a fresh view onto the sampling and reconstruction problem, i.e. the digital-to-analog conversion and its inverse.
It was somehow refreshing to have a talk about sampling which was totally free of the words “compressive” and “compressed”. Holger Boche modelled continuous time signals in the well known Bernstein and Paley-Wiener spaces and started with fairly old but not too well known results on exact reconstruction from non-uniform sampling.
Definition 1 For and , the Paley-Wiener space is
Definition 2 A function is of sine-type if it is of exponential type and
- its zeros are separated, and
- there exist real such that for all it holds that
Then, one theorem of reconstruction for non-uniform sampling points goes as follows:
Theorem 3 Let be a function of sine-type, whose zeros are all real and define
Then, for every and all it holds that
This theorem is a generalization of a result of J.L. Brown on local uniform convergence of the Shannon-sampling series.
These results are related to a sampling theorem due to Logan which I learned some years ago. First we recall that the Hilbert transform of a function can be defined via the Fourier transform as
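In discrete form this definition is easy to implement: the Hilbert transform becomes the Fourier multiplier -i·sign(ω) (one common sign convention; some texts use the opposite sign). A minimal sketch using the FFT, checked on the classical pair H(cos) = sin:

```python
import numpy as np

def hilbert_transform(f):
    """Hilbert transform of a periodic sampled signal via the Fourier
    multiplier -i * sign(omega) (sign convention with H(cos) = sin)."""
    fhat = np.fft.fft(f)
    omega = np.fft.fftfreq(len(f))          # only the sign of omega matters
    return np.real(np.fft.ifft(-1j * np.sign(omega) * fhat))

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
assert np.allclose(hilbert_transform(np.cos(3 * t)), np.sin(3 * t), atol=1e-8)
```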
Then, Logan’s Theorem is as follows:
Theorem 4 (Logan’s Sampling Theorem) Let be such that the support of both and is contained in for some number and with . Then, if and do not have a root in common with their Hilbert transforms, it holds that implies that is equal to up to a constant factor.
Proof: (Informal) Since modulation shifts the Fourier transform, we can write a function with as a modulated sum of functions and with bandwidth , using as follows
Expressing in terms of and gives
Note that and are real valued since is assumed to be real valued. Using and we conclude that
With the obvious notation we obtain that
Now, the zeros of are at least the common zeros of and . Moreover, the bandwidth of is at most (since the bandwidth of and is at most ). We can conclude that is identically zero by an argument which uses that the upper density of the zeros of (resp. ) is high enough to force to zero. Hence, we know that
To finish the proof one needs some argument from complex analysis (using that both sides of the above equation are extendable to meromorphic functions of exponential type) to conclude that equals up to a constant factor.
When I asked about the relation between Logan’s theorem and his results, he replied that Logan’s theorem was the starting point since he was curious why there was no practical implementation of this sampling procedure available. More precisely, he was interested in designing a digital implementation, based on sampling points, which allows one to digitize all linear time invariant filters. One of his negative results was the following:
Definition 5 A sequence is called a complete interpolation sequence for and if for every there is exactly one such that .
Theorem 6 For a complete interpolation sequence for and , the corresponding reconstruction functions and a given there always exists a stable linear time invariant filter and a signal such that
He conjectured (Conjecture 2.1 in his talk) that this phenomenon of point-wise divergence of the approximation of a linear time invariant filter remains even if oversampling is applied, i.e. that there always is a function () such that point-wise divergence happens. Moreover, he conjectured (Conjecture 2.2) that digitizing stable linear time invariant filters should be possible if one replaces point-sampling by appropriate linear functionals, and posed the open problem to both prove this conjecture and to find such linear functionals.
September 8, 2011
Although I have been at ENUMATH for four days now, I have not posted any news yet.
Most talks here dealt with numerical methods for PDEs and since this is not my primary topic I sometimes had a hard time grasping what the problems and goals were. One exception was the talk by Franco Brezzi. He gave an entertaining talk entitled “To reconstruct or not to reconstruct?”. In a nutshell he talked about discretization methods for PDEs by either finite differences or finite elements. He distinguished the two methods by the fact that finite difference methods only use and produce “nodal values”, i.e. values of the solution at specific points. On the other hand, finite element methods also work with a set of values describing the solution. However, these numbers are coefficients of some basis functions and hence, one can “reconstruct” a true function from these values. His first point was that methods with “reconstruction” usually have much simpler proofs for convergence and so on. However, finite difference methods are usually much simpler to implement since the discretization itself explicitly dictates the linear system one has to solve. He then introduced “mimetic finite differences” which, in my incomplete understanding, are a kind of finite difference method that incorporates more geometric information. During his talk I thought about phrasing his concepts of “reconstruction” and “evaluation” in the language of signal processing as “interpolation” and “sampling” and wondered whether this would give another perspective which could be helpful.
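To make the “nodal values only” point concrete, here is a minimal finite difference sketch (my own toy example, not from the talk): for -u'' = 1 on (0, 1) with zero boundary values, the method delivers nothing but values at the grid points; a finite element method would instead return coefficients of, say, piecewise linear hat functions and hence a reconstructed function.

```python
import numpy as np

# Finite differences for -u'' = 1 on (0, 1) with u(0) = u(1) = 0:
# the output is only the vector of nodal values u_1, ..., u_{n-1}.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)[1:-1]    # interior grid points
A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
u = np.linalg.solve(A, np.ones(n - 1))

# The exact solution x(1-x)/2 is quadratic, so the second-order difference
# quotient is exact and the nodal values match it up to rounding.
assert np.max(np.abs(u - x * (1 - x) / 2)) < 1e-10
```

A piecewise linear interpolation of these nodal values would be exactly the “reconstruction” step that finite element methods get for free.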
September 5, 2011
On my way to ENUMATH 11 in Leicester I stumbled upon the preprint Multi-parameter Tikhonov Regularisation in Topological Spaces by Markus Grasmair. The paper deals with fairly general Tikhonov functionals and their regularizing properties. Markus considers (nonlinear) operators between two sets and and analyzes minimizers of the functional
The functionals and play the roles of a similarity measure and regularization terms, respectively. While he also treats the issue of noise in the operator and multiple regularization terms, I was mostly interested in his approach to the general similarity measure. The category in which he works is that of topological spaces, and he writes:
“Because anyway no trace of an original Hilbert space or Banach space structure is left in the formulation of the Tikhonov functional […], we will completely discard all assumption of a linear structure and instead consider the situation, where both the domain and the co-domain of the operator are mere topological spaces, with the topology of defined by the distance measure .”
The last part of the sentence is important since previous papers often worked the other way round: Assume some topology in and then state conditions on . Nadja Worliczek observed in her talk “Sparse Regularization with Bregman Discrepancy” at GAMM 2011 that it seems more natural to deduce the topology from the similarity measure, and Markus takes the same approach. While Nadja used the notion of the “initial topology” (that is, take the coarsest topology that makes the functionals continuous), Markus uses the following family of pseudo-metrics: For define
Unfortunately, the preprint is a little bit too brief for me at this point and I did not totally get what he means with “the topology induced by the uniformity induced by the pseudo-metric”. Also, I am not totally sure if “pseudo-metric” is unambiguous… However, the topology he has in mind seems to be well suited in the sense that if for all . Moreover, the condition that iff implies that is Hausdorff. It would be good to have a better understanding of how the properties of the similarity measure are related to the properties of the induced topology. Are there examples in which the induced topology is both different from the usual norm and weak topologies and also interesting?
Moreover, I would be interested in the relation between the two approaches: via “uniformities” and via the initial topology…
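For contrast with the general topological setting: in the classical Hilbert space special case of such a Tikhonov functional, with quadratic similarity measure and a single quadratic penalty, the minimizer has a closed form. A minimal sketch (my own toy example, with a linear operator):

```python
import numpy as np

def tikhonov(A, v, alpha):
    """Minimizer of ||A u - v||^2 + alpha * ||u||^2 (the classical Hilbert
    space case): solves the normal equations (A^T A + alpha I) u = A^T v."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ v)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
u_true = rng.standard_normal(10)
v = A @ u_true + 0.01 * rng.standard_normal(20)

u0 = tikhonov(A, v, 1e-8)   # almost no regularization: close to least squares
u1 = tikhonov(A, v, 1e3)    # heavy regularization shrinks the solution
assert np.linalg.norm(u1) < np.linalg.norm(u0)
```

The general functionals in the paper replace both quadratic terms by arbitrary similarity measures and penalties, which is exactly why no Hilbert space structure survives in the formulation.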