In a recent post I wrote about time-varying channels. These were operators, described by a spreading function ${s:{\mathbb R}\times{\mathbb R}\rightarrow{\mathbb C}}$, mapping an input signal ${x}$ to an output signal ${y}$ via

$\displaystyle y(t) = H_s x(t) = \int_{{\mathbb R}}\int_{{\mathbb R}} s(\tau,\nu)x(t-\tau)\mathrm{e}^{\mathrm{i}\nu t}d\nu d\tau.$
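To make this concrete, here is a small numerical sketch of the action of ${H_s}$; the function names and grids are assumptions made for illustration, not from the original post. It approximates the double integral by a Riemann sum on a ${(\tau,\nu)}$ grid:

```python
import numpy as np

def apply_channel(s, taus, nus, x, ts):
    """Approximate y(t) = integral integral s(tau,nu) x(t-tau) exp(i nu t) dnu dtau
    by a Riemann sum over the given (tau, nu) grid.

    s  : callable spreading function s(tau, nu)
    x  : callable input signal x(t)
    ts : 1-d array of times at which to evaluate the output
    (All names and grids are assumptions for this sketch.)
    """
    dtau = taus[1] - taus[0]
    dnu = nus[1] - nus[0]
    y = np.zeros(len(ts), dtype=complex)
    for tau in taus:
        for nu in nus:
            # one Riemann-sum term: delay by tau, modulate by nu, weight by s
            y += s(tau, nu) * x(ts - tau) * np.exp(1j * nu * ts) * dnu * dtau
    return y
```

A spreading function concentrated at a single point ${(\tau_0,\nu_0)}$ then reproduces the expected delayed and modulated copy ${x(t-\tau_0)\mathrm{e}^{\mathrm{i}\nu_0 t}}$.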

1. The problem statement

In this post I write about the problem of identifying such a channel from input-output measurements. More precisely, the question is as follows:

Is it possible to choose a single input signal such that the corresponding output signal allows the computation of the spreading function?

At first sight this seems hopeless: The degrees of freedom we have are just a single function ${x:{\mathbb R}\rightarrow{\mathbb C}}$ of one variable, while we look for a function ${s:{\mathbb R}\times{\mathbb R}\rightarrow{\mathbb C}}$ of two variables. However, additional assumptions on ${s}$ may help, and indeed there is the following identification theorem due to Kailath:

Theorem 1 (Kailath) The channel ${H_s}$ can be identified if the support of the spreading function ${s}$ is contained in a rectangle with area smaller than ${2\pi}$.

Probably this looks a bit weird: The size of the support should allow for identification? However, after turning the problem around a bit, one sees that there is an intuitive explanation for this theorem which also explains the constant ${2\pi}$.

2. Translation to integral operators

The first step is to translate the problem into one with integral operators of the form

$\displaystyle F_k x(t) = \int_{\mathbb R} k(t,\tau)x(\tau)d\tau.$

We observe that ${k}$ is linked to ${s}$ via

$\displaystyle k(t,\tau) = \int s(t-\tau,\nu)e^{i\nu t}d\nu = (2\pi)^{1/2}\mathcal{F}_2^{-1}(s(t-\tau,\cdot))(t). \ \ \ \ \ (1)$

Hence, identifying ${s}$ is equivalent to identifying ${k}$ (by the way: I use the same normalization of the Fourier transform as in this post).
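To see where (1) comes from, substitute ${\tau' = t-\tau}$ in the defining integral of ${H_s}$:

```latex
y(t) = \int_{\mathbb R}\int_{\mathbb R} s(\tau,\nu)\,x(t-\tau)\,\mathrm{e}^{\mathrm{i}\nu t}\,d\nu\,d\tau
     = \int_{\mathbb R}\underbrace{\Big(\int_{\mathbb R} s(t-\tau',\nu)\,\mathrm{e}^{\mathrm{i}\nu t}\,d\nu\Big)}_{=\,k(t,\tau')}\,x(\tau')\,d\tau',
```

which is exactly ${F_k x(t)}$ with ${k}$ as in (1).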

From my point of view, the formulation of the identification problem is a bit clearer in the form of the integral operator ${F_k}$. It very much resembles “matrix-vector multiplication”: First you multiply ${k(t,\tau)}$ with your input signal ${x}$ in ${\tau}$ (forming “pointwise products of the rows and the input vector” as in matrix-vector multiplication) and then integrate with respect to ${\tau}$ (“summing up” the result).
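As a small illustration of this matrix-vector analogy (the grid and the kernel below are made up for the example):

```python
import numpy as np

# On a grid, the kernel k becomes a matrix K[i, j] = k(t_i, tau_j), and
# F_k x(t) = integral k(t, tau) x(tau) dtau becomes a matrix-vector
# product weighted by the grid spacing.
taus = np.linspace(0.0, 1.0, 101)            # tau grid, spacing 0.01
ts = taus                                    # evaluate output on the same grid
dtau = taus[1] - taus[0]

k = lambda t, tau: np.exp(-(t - tau) ** 2)   # example kernel (an assumption)
x = np.sin(2 * np.pi * taus)                 # input signal sampled on the grid

K = k(ts[:, None], taus[None, :])            # rows <-> t, columns <-> tau
y = K @ x * dtau                             # Riemann-sum approximation of F_k x
```

Each row of ${K}$ is multiplied pointwise with the input vector and summed, precisely the "multiply in ${\tau}$, then integrate over ${\tau}$" picture above.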

3. Sending Diracs through the channel

Now let’s see if we can identify ${k}$ from a single input-output measurement. We start by sending a single Dirac impulse through the channel: ${x = \delta_0}$. This formally gives

$\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\delta_0(\tau)d\tau = k(t,0).$

Ok, this is not too much. From the knowledge of ${y}$ we can directly infer the values ${k(t,0)}$ but that’s it – nothing more.

But we could also send a Dirac at a different position ${\tau_0}$, i.e. ${x = \delta_{\tau_0}}$, and obtain

$\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\delta_{\tau_0}(\tau)d\tau = k(t,\tau_0).$

Ok, different Diracs give us information about ${k}$ for all ${t}$ but only for a single ${\tau_0}$. Let’s send a whole spike train (or Dirac comb) of “width” ${c>0}$: ${x = \Delta_c = \sum_{n\in{\mathbb Z}} \delta_{cn}}$. We obtain

$\displaystyle y(t) = \int_{\mathbb R} k(t,\tau)\sum_{n\in{\mathbb Z}}\delta_{nc}(\tau)d\tau = \sum_{n\in{\mathbb Z}}k(t,cn).$

Oh – this does not reveal a single value of ${k}$ but “aggregated” values…

4. Choosing the right spike train

Now let’s translate the one condition on ${s}$ we have into a condition on ${k}$: The support of ${s}$ is contained in a rectangle with area less than ${2\pi}$. Let’s assume that this rectangle is centered around zero (which is no loss of generality) and denote it by ${[-a/2,a/2]\times [-b/2,b/2]}$, i.e. ${s(\tau,\nu) = 0}$ if ${|\tau|>a/2}$ or ${|\nu|>b/2}$ (and of course ${ab<2\pi}$). From (1) we conclude that

$\displaystyle k(t,\tau) = 0\ \text{ for }\ |t-\tau|>a/2.$

In the ${(t,\tau)}$-plane, the support looks like this:

If we now send this spike train in the channel, we can visualize the output as follows:

We observe that in this case we can infer the values of ${k(t,nc)}$ at some points ${t}$, but at other points the shifted copies overlap and we only observe a sum of ${k(t,nc)}$ and ${k(t,(n+1)c)}$. But if we make ${c}$ large enough, namely larger than ${a}$, the copies do not overlap at all and we can read off the values ${k(t,nc)}$ exactly from the output signal!
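This separation step can be checked numerically; the kernel below is a made-up example satisfying the support condition ${k(t,\tau)=0}$ for ${|t-\tau|>a/2}$ (all names and values are assumptions for illustration):

```python
import numpy as np

a = 1.0
c = 1.5                                   # spike spacing: c > a

def k(t, tau):
    # hypothetical kernel, zero outside the band |t - tau| <= a/2
    d = t - tau
    return np.where(np.abs(d) <= a / 2, np.cos(tau) * (1 - (2 * d / a) ** 2), 0.0)

ts = np.linspace(-5.0, 5.0, 2001)

# response to the spike train: y(t) = sum_n k(t, n c)
y = sum(k(ts, n * c) for n in range(-6, 7))

# Since c > a, the n-th copy k(., n c) lives in |t - n c| <= a/2 and these
# bands are disjoint: near t = n c the output coincides with k(t, n c).
```

On the band around any spike position the output equals the corresponding kernel slice exactly, which is what the identification argument uses.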

5. Is that enough?

Ok, now we know something about ${k}$, but how can we infer the rest of the values? Don’t forget that we know ${ab < 2\pi}$ and that ${s(\tau,\nu)}$ is supported in ${[-a/2,a/2]\times[-b/2,b/2]}$. Up to now we only used the value of ${a}$. How does the support limitation in ${\nu}$ translate to ${k}$? We look again at (1) and see: for every ${\sigma}$, the function ${\tilde k_\sigma:t\mapsto k(t,t-\sigma)}$ is (up to a constant) the inverse Fourier transform of ${s(\sigma,\cdot)}$ and hence bandlimited with bandwidth ${b/2}$. And what do we know about these functions ${\tilde k_\sigma}$? We know the samples ${\tilde k_\sigma(\sigma+nc) = k(\sigma+nc,nc)}$. In other words, for every ${\sigma}$ we have samples of ${\tilde k_\sigma}$ with sampling spacing ${c}$. But there is the famous Nyquist-Shannon sampling theorem:

Theorem 2 (Nyquist-Shannon) A function ${f:{\mathbb R}\rightarrow{\mathbb C}}$ which is bandlimited with bandwidth ${B}$ is totally determined by its discrete-time samples ${f(n\pi/B)}$, ${n\in{\mathbb Z}}$, and it holds that

$\displaystyle f(t) = \sum_{n\in{\mathbb Z}} f(n\pi/B)\textup{sinc}\Big(\tfrac{B}{\pi}\big(t - \tfrac{n\pi}{B}\big)\Big).$

We are happy with the first assertion: the samples totally determine the function if they are dense enough. In our case the bandwidth is ${b/2}$ and hence we need a sampling spacing of at most ${2\pi/b}$. What we have are samples with spacing ${c}$.
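The reconstruction asserted by Theorem 2 can be sketched numerically; the bandwidth ${B}$ and the test function below are assumptions chosen for the example (note that NumPy’s `sinc` uses the normalized convention ${\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)}$):

```python
import numpy as np

B = 2.0
T = np.pi / B                              # Nyquist sampling spacing pi/B

# f(t) = (sin(B t / 2) / (B t / 2))^2: its Fourier transform is a triangle
# supported in [-B, B], so f is bandlimited with bandwidth B.
f = lambda t: np.sinc(B * t / (2 * np.pi)) ** 2

n = np.arange(-200, 201)                   # truncate the infinite sample sum
samples = f(n * T)

def reconstruct(t):
    # cardinal series: sum_n f(nT) sinc((B/pi)(t - nT));
    # with np.sinc this is np.sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))
```

Up to the truncation of the sum, `reconstruct` reproduces ${f}$ at arbitrary points from the samples alone.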

We collect our conditions on ${c}$: We need ${c}$

• larger than ${a}$ to separate the values of ${k(t,nc)}$ in the output.
• smaller than ${2\pi/b}$ so that the samples with spacing ${c}$ determine the full kernel ${k}$ via the sampling theorem.

Both together say

$\displaystyle a < c < \frac{2\pi}{b}.$

Such a ${c}$ exists if ${ab < 2\pi}$, and we have proven Theorem 1.

6. Concluding remarks

The proof of Kailath’s theorem reveals the role of the constraint ${ab <2\pi}$: We want ${a}$ not to be too large so that we can separate the values of ${k(t,nc)}$ in the output measurement, and we need ${b}$ not too large to ensure that we can interpolate the values ${k(t,nc)}$ exactly.

However, a severe drawback of this result is that one needs to know ${a}$ and ${b}$ to construct the “sensing signal”, which was the spike train ${\Delta_c}$. Moreover, this sensing signal itself has infinite bandwidth, which is practically not desirable. It seems to be an open question whether there are more practical sensing signals.

There is more known about sensing of time-varying channels: The support of ${s}$ does not have to be contained in a rectangle; it is enough if the measure of the support is smaller than ${2\pi}$ (a result usually attributed to Bello). Moreover, there are converse results which say that linear and stable identification is only possible under this restriction on the support size (see the work of Götz Pfander and coauthors).