A few days ago I attended IFIP TC7 and participated in the minisymposium “Optimization in Banach spaces with sparsity constraints” organized by Kristian Bredies and Christian Clason. My session was very interesting, consisting of talks by Masha Zhariy (Optimization algorithms for sparse reconstruction in inverse problems with adaptive stopping rules), Caroline Verhoeven (On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty) and Kristian (Inverse problems in spaces of measures). Unfortunately, I could not take part in the second half of the minisymposium.

Although the minisymposium was interesting, I’d like to talk about the plenary talk “Signal and systems representation; signal spaces, sampling, and quantization effects” by Holger Boche. The title sounded as though I should be able to follow it, but I had never heard the name before. And indeed, Holger Boche gave a fresh view of the sampling and reconstruction problem, i.e. the digital-to-analog conversion and its inverse.

It was somehow refreshing to hear a talk about sampling which was totally free of the words “compressive” and “compressed”. Holger Boche modelled continuous-time signals in the well-known Bernstein and Paley-Wiener spaces and started with fairly old but not too well-known results on exact reconstruction from non-uniform sampling.

**Definition 1** *For $1 \le p \le \infty$ and $\sigma > 0$, the **Paley-Wiener space** $PW^p_\sigma$ is*

$$PW^p_\sigma = \Big\{ f \,:\, f(z) = \frac{1}{2\pi}\int_{-\sigma}^{\sigma} g(\omega)\, e^{iz\omega}\, d\omega \ \text{ for some } g \in L^p([-\sigma,\sigma]) \Big\}.$$

**Definition 2** *A function $f$ is of **sine-type** if it is of exponential type $\pi$ and*

- *its zeros are separated, and*
- *there exist real constants $A, B, H > 0$ such that for all $x \in \mathbb{R}$ and all $y$ with $|y| \ge H$ it holds that*

$$A\, e^{\pi |y|} \le |f(x+iy)| \le B\, e^{\pi |y|}.$$

Then, one reconstruction theorem for non-uniform sampling points goes as follows:

**Theorem 3** *Let $f$ be a function of sine-type whose zeros $(t_k)_{k\in\mathbb{Z}}$ are all real and define*

$$\phi_k(t) = \frac{f(t)}{f'(t_k)\,(t - t_k)}.$$

*Then, for every $g \in PW^1_\pi$ and all $T > 0$ it holds that*

$$\lim_{N\to\infty}\ \max_{t \in [-T,T]} \Big| g(t) - \sum_{k=-N}^{N} g(t_k)\,\phi_k(t) \Big| = 0.$$

This theorem is a generalization of a result of J.L. Brown on local uniform convergence of the Shannon-sampling series.
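To see the classical case inside Theorem 3 (an illustration of my own, not from the talk): for $f(t) = \sin(\pi t)$, which is of sine-type with the real, separated zeros $t_k = k$, the reconstruction functions $\phi_k$ reduce to shifted sinc functions and the series becomes the Shannon sampling series. A quick numerical sketch of the locally uniform convergence (the test signal is an arbitrary choice):

```python
import numpy as np

# For f(t) = sin(pi t) the zeros are t_k = k and phi_k(t) = sinc(t - k)
# (np.sinc is the normalized sinc), so Theorem 3 specializes to the
# Shannon sampling series. The test signal below is my own choice,
# bandlimited to [-pi, pi] with quickly decaying samples.
t = np.linspace(-5.0, 5.0, 1001)              # grid on [-T, T] with T = 5
g = lambda s: np.sinc((s - 0.3) / 2) ** 2     # a bandlimited test signal

def truncated_series(t, N):
    k = np.arange(-N, N + 1)
    # sum_{k=-N}^{N} g(t_k) * phi_k(t), evaluated on the whole grid at once
    return np.sinc(t[:, None] - k[None, :]) @ g(k)

# maximal error on [-T, T] for growing truncation index N
err = [np.max(np.abs(g(t) - truncated_series(t, N))) for N in (10, 40, 160)]
print(err)  # shrinks as N grows, as the theorem predicts
```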

These results are related to a sampling theorem due to Logan which I learned some years ago. First we recall that the Hilbert transform $Hf$ of a function $f$ can be defined via the Fourier transform as

$$\widehat{Hf}(\omega) = -i\,\operatorname{sgn}(\omega)\,\hat f(\omega).$$
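In this sign convention the Hilbert transform maps $\cos$ to $\sin$. A minimal numerical sketch (my own addition) that realizes $H$ as the Fourier multiplier $-i\,\operatorname{sgn}(\omega)$ on a periodic grid:

```python
import numpy as np

# Discrete Hilbert transform via the multiplier -i*sgn(omega) in the
# Fourier domain; the grid size is an arbitrary choice for the check below.
N = 256
t = 2 * np.pi * np.arange(N) / N
omega = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies

def hilbert(x):
    return np.fft.ifft(-1j * np.sign(omega) * np.fft.fft(x)).real

# sanity check: H(cos) = sin in this sign convention
assert np.allclose(hilbert(np.cos(t)), np.sin(t))
```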

Then, Logan’s Theorem is as follows:

**Theorem 4 (Logan’s Sampling Theorem)** *Let $f$ and $g$ be real valued such that the support of both $\hat f$ and $\hat g$ is contained in $[-b\Omega, -\Omega] \cup [\Omega, b\Omega]$ for some number $\Omega > 0$ and some $b$ with $1 < b < 2$. Then, if $f$ and $g$ do not have a root in common with their Hilbert transform, it holds that $\operatorname{sgn}(f) = \operatorname{sgn}(g)$ implies that $f$ is equal to $g$ up to a constant factor.*

*Proof:* (Informal) Since modulation shifts the Fourier transform, we can write a function $f$ with $\operatorname{supp}\hat f \subset [-b\Omega,-\Omega] \cup [\Omega, b\Omega]$ as a modulated sum of functions $u$ and $v$ with bandwidth $\tfrac{(b-1)\Omega}{2}$, using the center frequency $\omega_0 = \tfrac{(b+1)\Omega}{2}$, as follows

$$f(t) = u(t)\cos(\omega_0 t) - v(t)\sin(\omega_0 t).$$

Since $u$ and $v$ are lowpass with bandwidth below $\omega_0$, one has $Hf(t) = u(t)\sin(\omega_0 t) + v(t)\cos(\omega_0 t)$. Expressing $u$ and $v$ in terms of $f$ and $Hf$ gives

$$u(t) = f(t)\cos(\omega_0 t) + Hf(t)\sin(\omega_0 t), \qquad v(t) = -f(t)\sin(\omega_0 t) + Hf(t)\cos(\omega_0 t).$$

Note that $u$ and $v$ are real valued since $f$ is assumed to be real valued. Using the analogous representation of $g$ and $Hg$ with lowpass functions $u_g$ and $v_g$ (and writing $u_f$, $v_f$ for the components of $f$) we obtain, with the obvious notation, that

$$f\,Hg - g\,Hf = u_f v_g - u_g v_f.$$
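The identity $f\,Hg - g\,Hf = u_f v_g - u_g v_f$ can also be checked numerically; the following sketch (my own, with arbitrarily chosen lowpass components and center frequency) builds two bandpass signals and compares both sides:

```python
import numpy as np

# Periodic grid and FFT-based Hilbert transform (multiplier -i*sgn(omega))
N = 4096
t = 2 * np.pi * np.arange(N) / N
omega = np.fft.fftfreq(N, d=1.0 / N)

def hilbert(x):
    return np.fft.ifft(-1j * np.sign(omega) * np.fft.fft(x)).real

w0 = 20                                        # center frequency (assumption)
# lowpass components with bandwidth well below w0 (arbitrary choices)
uf, vf = np.cos(3 * t), np.sin(2 * t)
ug, vg = np.cos(t) + 0.5 * np.sin(4 * t), np.cos(2 * t)

# bandpass signals of the form u*cos(w0 t) - v*sin(w0 t)
f = uf * np.cos(w0 * t) - vf * np.sin(w0 * t)
g = ug * np.cos(w0 * t) - vg * np.sin(w0 * t)

lhs = f * hilbert(g) - g * hilbert(f)
rhs = uf * vg - ug * vf
assert np.allclose(lhs, rhs)                   # the modulation cancels
```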

Now, the zeros of $f\,Hg - g\,Hf$ are at least the common zeros of $f$ and $g$. Moreover, the bandwidth of $u_f v_g - u_g v_f$ is at most $(b-1)\Omega < \Omega$ (since the bandwidth of $u_f$, $v_f$, $u_g$ and $v_g$ is at most $\tfrac{(b-1)\Omega}{2}$). We can conclude that $u_f v_g - u_g v_f$ is identically zero by an argument which uses that the upper density of the zeros of $f$ (resp. $g$) is high enough to force a function of such small bandwidth to zero. Hence, we know that

$$f\,Hg = g\,Hf.$$

To finish the proof one needs some argument from complex analysis (using that both sides of the above equation are extendable to meromorphic functions of exponential type) to conclude that $f$ equals $g$ up to a constant factor.

When I asked about the relation between Logan’s theorem and his results, he replied that Logan’s theorem was the starting point: he was curious why there was no practical implementation of this sampling procedure available. More precisely, he was interested in designing a digital implementation, based on sampling points, which allows one to digitize all linear time invariant filters. One of his negative results was the following.

**Definition 5** *A sequence $(t_k)_{k\in\mathbb{Z}}$ is called a **complete interpolation sequence** for $PW^p_\pi$ and $\ell^p$ if for every $c \in \ell^p$ there is exactly one $f \in PW^p_\pi$ such that $f(t_k) = c_k$.*

**Theorem 6** *For a complete interpolation sequence $(t_k)_{k\in\mathbb{Z}}$ for $PW^1_\pi$ and $\ell^1$, the corresponding reconstruction functions $\phi_k$ and a given $t \in \mathbb{R}$, there always exists a stable linear time invariant filter $T$ and a signal $f \in PW^1_\pi$ such that*

$$\limsup_{N\to\infty} \Big| \sum_{k=-N}^{N} f(t_k)\,(T\phi_k)(t) \Big| = \infty.$$

He conjectured (Conjecture 2.1 in his talk) that this phenomenon of point-wise divergence of the approximation of a linear time invariant filter persists even if oversampling is applied, i.e. that there always is a signal for which point-wise divergence happens. Moreover, he conjectured (Conjecture 2.2) that digitizing stable linear time invariant filters should be possible if one replaces point-sampling by appropriate linear functionals, and he posed the open problem to both prove this conjecture and to find such linear functionals.
