If you want to have a sparse solution to a linear system of equations and have heard of compressed sensing or sparse reconstruction, then you probably know what to do: get one of the many solvers for Basis Pursuit and be happy.
Basis Pursuit was designed as a convex approximation of the generally intractable problem of finding the sparsest solution (that is, the solution with the smallest number of non-zero entries). By abuse of notation, we define for $x\in\mathbb{R}^n$

$$\|x\|_0 = \#\{i \,:\, x_i \neq 0\}.$$
(Because some people prefer the, probably more correct but also more confusing, notation $\|x\|_0^0$.)
Then, the sparsest solution of $Ax=b$ is the solution of

$$\min_x \|x\|_0 \quad\text{s.t.}\quad Ax=b,$$

and Basis Pursuit replaces $\|\cdot\|_0$ with “the closest convex proxy”, i.e.

$$\min_x \|x\|_1 \quad\text{s.t.}\quad Ax=b.$$
The good thing about Basis Pursuit is that it really gives the sparsest solution under appropriate conditions, as is widely known nowadays. Here I’d like to present two simple examples in which the Basis Pursuit solution is
- not even close to the sparsest solution (in norm),
- not sparse at all.
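Before the bad examples, it may help to see the good case once. The following is my own numerical sketch (not from the post): Basis Pursuit, written as the standard linear program $\min \sum_i (u_i+v_i)$ s.t. $A(u-v)=b$, $u,v\ge 0$ with $x = u-v$, exactly recovers a sparse vector from Gaussian measurements. The dimensions and the use of SciPy's `linprog` are arbitrary illustrative choices.

```python
# Sketch: Basis Pursuit recovering a sparse vector from random measurements.
# Uses the standard LP reformulation: min sum(u+v) s.t. A(u-v) = b, u,v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 50, 100, 5                      # measurements, dimension, sparsity

A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

res = linprog(np.ones(2 * n),             # objective: sum(u) + sum(v) = ||x||_1
              A_eq=np.hstack([A, -A]),    # equality constraint A(u - v) = b
              b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_bp = res.x[:n] - res.x[n:]
# For a Gaussian A of this size, exact recovery of x_true holds with
# overwhelming probability, and x_bp matches x_true up to solver tolerance.
```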
1. A small bad matrix
We can build a bad matrix for Basis Pursuit, even in the case $A\in\mathbb{R}^{2\times 3}$: For a small $\epsilon>0$ define

$$A = \begin{bmatrix} \epsilon & 1 & 0 \\ \epsilon & 0 & 1 \end{bmatrix},\qquad b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Of course, the sparsest solution is

$$x_0 = \begin{bmatrix} \tfrac{1}{\epsilon} \\ 0 \\ 0 \end{bmatrix},$$

while the solution of Basis Pursuit is

$$x_1 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$
To summarize: For $0<\epsilon<\tfrac12$ it holds that

$$\|x_0\|_0 = 1 < 2 = \|x_1\|_0 \qquad\text{but}\qquad \|x_0\|_1 = \tfrac{1}{\epsilon} > 2 = \|x_1\|_1.$$
(There is also a least squares solution that has three non-zero entries and a one-norm slightly larger than 2.)
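A quick numerical check of this example (my own addition, again via the LP reformulation $\min \sum(u+v)$ s.t. $A(u-v)=b$, $u,v\ge 0$; the value $\epsilon = 0.1$ is an arbitrary choice below $1/2$):

```python
# Basis Pursuit on the small bad 2x3 matrix: the minimizer of the one-norm
# is the denser solution (0, 1, 1), not the sparsest one (1/eps, 0, 0).
import numpy as np
from scipy.optimize import linprog

eps = 0.1
A = np.array([[eps, 1.0, 0.0],
              [eps, 0.0, 1.0]])
b = np.array([1.0, 1.0])

n = A.shape[1]
res = linprog(np.ones(2 * n),             # minimize sum(u) + sum(v)
              A_eq=np.hstack([A, -A]),    # A(u - v) = b
              b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_bp = res.x[:n] - res.x[n:]
# x_bp is (0, 1, 1) with one-norm 2, while the sparsest solution
# (1/eps, 0, 0) has one-norm 1/eps = 10.
```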
Granted, this matrix is stupid. In particular, its first column has a very small norm compared to the others. Ok, let’s construct a matrix with normalized columns.
2. A small bad matrix with normalized columns
Fix an integer $n$ and a small $\epsilon>0$. We define the $n\times(n+2)$-matrix $\tilde A$ whose last $n$ columns form the $n\times n$ identity matrix $I_n$ and whose first two columns are

$$\tilde a_1 = \begin{bmatrix} 1+\tfrac{\epsilon}{2} \\ -1+\tfrac{\epsilon}{2} \\ \tfrac{\epsilon}{2} \\ \vdots \\ \tfrac{\epsilon}{2} \end{bmatrix},\qquad \tilde a_2 = \begin{bmatrix} -1+\tfrac{\epsilon}{2} \\ 1+\tfrac{\epsilon}{2} \\ \tfrac{\epsilon}{2} \\ \vdots \\ \tfrac{\epsilon}{2} \end{bmatrix}.$$

Ok, the first two columns do not have norm 1 yet (their squared norm is $(1+\tfrac{\epsilon}{2})^2 + (1-\tfrac{\epsilon}{2})^2 + (n-2)\tfrac{\epsilon^2}{4} = 2 + \tfrac{n\epsilon^2}{4}$), so we normalize them by multiplying with the right constant

$$c = \frac{1}{\sqrt{2+n\epsilon^2/4}}$$

(which is close to $\tfrac{1}{\sqrt2}$) to get

$$A = \begin{bmatrix} c\tilde a_1 & c\tilde a_2 & I_n \end{bmatrix}.$$
Now we take the right hand side

$$b = (1,\dots,1)^T \in \mathbb{R}^n$$

and see what solutions of $Ax=b$ there are.
First, there is the least squares solution $x^\dagger = A^\dagger b$ (i.e. the solution with minimal 2-norm). This has only non-zero entries; the last $n$ entries are slightly smaller than $1$ and the first two are between $0$ and $1$, hence, $\|x^\dagger\|_1 \approx n$ (in fact, slightly larger).
Second, there is a very sparse solution

$$x_s = \frac{1}{c\epsilon}\,(1,\,1,\,0,\,\dots,\,0)^T.$$

This has two non-zero entries and a pretty large one-norm: $\|x_s\|_1 = \tfrac{2}{c\epsilon} \approx \tfrac{2\sqrt2}{\epsilon}$.
Third, there is a solution with small one-norm:

$$x_1 = (0,\,0,\,1,\,\dots,\,1)^T.$$

We have $\|x_1\|_0 = n$ non-zero entries and $\|x_1\|_1 = n$. You can check that this is also the unique Basis Pursuit solution (e.g. by observing that $A^Tb$ is an element of $\partial\|x_1\|_1$ and that the first two entries of $A^Tb$ are strictly smaller than 1 and positive – put differently, the vector $b$ is a dual certificate for $x_1$).
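For completeness, the certificate computation spelled out (my own addition): the last $n$ columns of $A$ are identity columns, so they pair with $b=(1,\dots,1)^T$ to give $1$, while each of the first two columns has entries summing to $cn\epsilon/2$. Hence

\[
A^T b \;=\; \Bigl(\tfrac{cn\epsilon}{2},\ \tfrac{cn\epsilon}{2},\ 1,\ \dots,\ 1\Bigr)^T,
\qquad
\tfrac{cn\epsilon}{2} \;=\; \frac{n\epsilon}{2\sqrt{2+n\epsilon^2/4}} \;<\; 1
\;\Longleftrightarrow\;
\epsilon^2 \;<\; \tfrac{8}{n^2-n},
\]

so the entries of $A^Tb$ equal $1 = \operatorname{sign}((x_1)_i)$ on the support of $x_1$ and lie strictly inside $(-1,1)$ off the support exactly when $\epsilon$ stays below this threshold.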
To summarize, for $0 < \epsilon < \sqrt{8/(n^2-n)}$ it holds that

$$\|x_s\|_0 = 2 \ll n = \|x_1\|_0 \qquad\text{but}\qquad \|x_1\|_1 = n < \tfrac{2}{c\epsilon} = \|x_s\|_1.$$
The geometric idea behind this matrix is as follows: We take $n$ simple normalized columns (the identity part of $A$) which sum up to the right hand side $b$. Then we take two normalized vectors which are almost orthogonal to $b$ but have $b$ in their span (but one needs huge factors here to obtain $b$).
Well, this matrix looks very artificial, and indeed it was constructed for one special purpose: to show that minimal one-norm solutions are not always sparse (even when a sparse solution exists). It is something of a hobby of mine to construct instances for sparse reconstruction with extreme properties, and I am thinking about a kind of “gallery” of such instances (probably extending the “gallery” command in Matlab).
By the way: if you want to play around with this matrix, here is the Matlab code:
```matlab
n = 100;
epsilon = 0.1*sqrt(8/(n^2-n));  % must stay strictly below sqrt(8/(n^2-n))
c = 1/sqrt(2+n*epsilon^2/4);
A = zeros(n,n+2);
A(1:2,1:2) = ([1 -1;-1 1]+epsilon/2)*c;
A(3:n,1:2) = epsilon/2*c;
A(1:n,3:n+2) = eye(n);
b = ones(n,1);
```
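If you prefer Python, here is a rough NumPy/SciPy translation of the construction (my own addition), together with a Basis Pursuit solve via the LP reformulation $\min \sum(u+v)$ s.t. $A(u-v)=b$, $u,v\ge 0$; the value of $\epsilon$ is chosen strictly below the threshold $\sqrt{8/(n^2-n)}$ so that $b$ is a dual certificate:

```python
# The bad matrix with normalized columns, and a Basis Pursuit solve on it.
import numpy as np
from scipy.optimize import linprog

n = 100
eps = 0.1 * np.sqrt(8 / (n**2 - n))   # strictly below the threshold
c = 1 / np.sqrt(2 + n * eps**2 / 4)   # normalization constant, close to 1/sqrt(2)

A = np.zeros((n, n + 2))
A[:2, :2] = (np.array([[1.0, -1.0], [-1.0, 1.0]]) + eps / 2) * c
A[2:, :2] = eps / 2 * c
A[:, 2:] = np.eye(n)
b = np.ones(n)

N = n + 2
res = linprog(np.ones(2 * N),             # minimize sum(u) + sum(v)
              A_eq=np.hstack([A, -A]),    # A(u - v) = b
              b_eq=b,
              bounds=[(0, None)] * (2 * N))
x_bp = res.x[:N] - res.x[N:]
# Basis Pursuit returns the dense solution (0, 0, 1, ..., 1) with one-norm n,
# while the 2-sparse solution (1/(c*eps), 1/(c*eps), 0, ..., 0) has the much
# larger one-norm 2/(c*eps).
```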